From: "Ben Tilly" <[EMAIL PROTECTED]>
   Date: Thu, 26 Oct 2006 19:36:36 -0700

   On 10/26/06, Bob Rogers <[EMAIL PROTECTED]> wrote:
   >    From: "Ben Tilly" <[EMAIL PROTECTED]>
   >    Date: Thu, 26 Oct 2006 17:09:35 -0700
   >
   >    [...]
   >
   >    While I agree that continuations are insanely powerful, I likewise
   >    prefer generators and iterators to continuations.  Why?  Well suppose
   >    that you write some complex code using continuations.  Suppose that
   >    something goes wrong.  Now you want to debug it.  So you'd want some
   >    useful debugging information.  Something like, say, a stack backtrace.
   >
   >    You're outta luck.
   >
   > That is not true -- or, at least, not necessarily true.  In a full CPS
   > language like Scheme, tail-merging (or the equivalent CP transformation)
is obligatory, so you may in fact be out of luck.  (I've never used Scheme,
   > so I don't know how a real Schemer goes about debugging.)  However, a
   > Scheme continuation looks just like a closure at the source level.  It
   > is possible to write nearly identical code in Common Lisp that uses
   > closures, and since CL does not tail-merge by default, you get the same
   > functionality plus useful backtraces in the debugger.

   Ummm...you've mixed three unrelated things together.

   1. Continuations.
   2. Closures.
   3. Tail call elimination.

"Unrelated"??  I don't think so.

   Taking them in reverse order, here is what they are:

   3. Tail call elimination is the process of eliminating call frames
   that have nothing left to do except return the result of a later call.
   So in other words if foo ends with return bar(), then one can wipe out
   foo's call frame and replace it with bar's.

Mostly true, but you also have to consider dynamic environment.  For
example, if (in Perl 5) you have

        sub foo {
            local $arg = shift;

            return bar($arg+1);
        }

the dynamic binding of $arg means that foo has some cleanup to do after
bar returns, so the call to bar is not a tail call.  If foo used "my"
instead of "local" (which probably makes more sense in any case), then
the call to bar would indeed be a tail call.
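The same point can be made in Python terms -- a rough sketch, since Python
has no "local" and the save/restore must be written out by hand (bar here
is a made-up stand-in callee):

```python
arg = None   # a global, playing the role of Perl's dynamically scoped $arg

def bar(n):            # stand-in callee
    return n * 2

def foo(x):
    global arg
    saved, arg = arg, x      # rebind, as Perl's "local $arg = shift" does
    try:
        return bar(arg + 1)  # NOT a tail call: the finally clause still
    finally:                 # has to run after bar returns
        arg = saved          # unwind the dynamic binding

print(foo(3))  # 8
```

The try/finally is exactly the "cleanup" that disqualifies the call from
tail position.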

   Tail call elimination can make some recursive algorithms as efficient
   as looping.  At the expense of losing potentially useful debugging
   information.

In reducing the stack requirement to that of a loop, you also reduce the
amount of debugging information to that of a loop.  But what "useful"
information about the previous iterations of a loop would you want to
keep?  Are you saying that loops are therefore intrinsically harder to
debug than (non-optimized) recursions?
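To make the comparison concrete, here is a Python sketch (CPython does not
actually eliminate tail calls, so this only illustrates the equivalence of
state, not the optimization itself):

```python
def sum_rec(xs, acc=0):
    # Tail-recursive: the recursive call is the last thing this frame does,
    # so an eliminating implementation would keep exactly one frame alive.
    if not xs:
        return acc
    return sum_rec(xs[1:], acc + xs[0])

def sum_loop(xs):
    # The loop keeps the same amount of state: one accumulator, one cursor.
    acc = 0
    for x in xs:
        acc += x
    return acc

print(sum_rec([1, 2, 3, 4]), sum_loop([1, 2, 3, 4]))  # 10 10
```

Either way, the intermediate states of earlier iterations are gone by the
time anything goes wrong.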

   2. Closures are anonymous functions that carry their own environment
   (ie variables that were in lexical scope when the function was
   created).  I love closures.  I'm very glad that Perl 5 has them.

Agreed!  (Except that they need not be anonymous; that's just an
unfortunate quirk of Perl 5.)
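A quick Python sketch of a named closure, just to show there is nothing
anonymous about the idea:

```python
def make_counter(start=0):
    count = start              # lexical variable captured by the closure
    def bump(delta=1):         # a *named* inner function -- still a closure
        nonlocal count
        count += delta
        return count
    return bump

c = make_counter(10)
print(c(), c(5))  # 11 16
```

Each call to make_counter produces an independent closure over its own
"count".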

   1. Continuations are a representation of the execution state of a
   program at a given time.

The definition I'm accustomed to is much broader.  A continuation is a
funcallable object that represents the *rest* of the computation.
Most, if not all, of our disagreement is due to this difference.

   FWIW, Wikipedia [1] agrees with you, but I think this definition is
too limited, in that it defines "the rest of the computation" as the
resumption of some old context.  In short, it leaves out closures as an
implementation mechanism.

   . . .

   If that didn't make any sense, let me make it simpler.  A language
   like Common Lisp can be implemented with all of the function calls
   represented on a stack.  And therefore when you ask for a stack
   backtrace, there is something to give you.  However continuations
   introduce too much complexity to be implemented with a stack, and
   therefore it is impossible to show what is going on with something as
   simple as a stack backtrace.

Parrot has both continuations and backtraces.  Each context (aka
activation record) has a "return continuation" to the calling context,
so Parrot follows the chain of continuations back to the beginning [2].
Of course, you can use continuations to jump between contexts that are
unrelated, as when using coroutines, and in such cases the backtrace
won't show these jumps.  But if you *are* using coroutines, then you
should have known the job was dangerous when you took it.
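The scheme can be sketched in a dozen lines of Python; the names and
structure here are hypothetical, not Parrot's actual data structures:

```python
class Context:
    """An activation record holding a "return continuation" to its caller."""
    def __init__(self, name, caller=None):
        self.name = name
        self.caller = caller   # the return continuation

def backtrace(ctx):
    # Follow the chain of return continuations back to the beginning.
    frames = []
    while ctx is not None:
        frames.append(ctx.name)
        ctx = ctx.caller
    return frames

main = Context("main")
inner = Context("bar", Context("foo", main))
print(backtrace(inner))  # ['bar', 'foo', 'main']
```

A coroutine jump would splice in a context whose caller chain belongs to
some other line of control, which is why such jumps don't show up.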

   Ironically, I believe that a Parrot continuation (which is only
glancingly related to a closure) is more or less what you mean when you
say "continuation."

   While the Scheme continuations may look similar to closures to you,
   they're NOT similar to closures, and the implications are VERY
   different from closures.

That is true only for the limited definition of continuation you propose
above, and not even generally true of Scheme.  Allow me to quote from a
post I wrote last summer [3] that was part of a discussion of exception
handling in Parrot.  I was responding to a question by Chip Salzenberg
about why I thought a Scheme compiler for Parrot would not need Parrot
continuations:

        . . . I believe that Scheme has a different idea of continuation
        than Parrot.  In brief, if you rewrite call-and-return into pure
        continuation-passing calling [4], it looks like a call into the
        sub followed by another call to the continuation.  All of these
        can in fact be implemented as tail calls, since there are no
        returns, and Scheme stipulates that you *must* tail-merge where
        possible.  This means you don't need a stack -- there is never
        more than one activation record at any given time -- so all of
        the state required for a continuation can be captured by a
        closure.  In the Lisp community generally, "closure" and
        "continuation" are therefore often used interchangeably, though
        this does sweep some distinctions under the rug.

           So all a CPS language really needs is support for closures.
        Such an implementation is truly and utterly stackless, which
        means that dynamic-wind needs to keep its own stack explicitly,
        and similarly for dynamic binding (which, IIUC, is generally
        implemented in terms of dynamic-wind).
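For concreteness, the rewrite just described looks like this in Python (a
toy sketch, not Scheme; note that the continuations are ordinary closures,
which is exactly the point):

```python
# Direct style:
def add_squares(x, y):
    return x * x + y * y

# After the CP transformation: every function takes an extra argument k
# (its continuation), and every call -- including the final call to k --
# is a tail call.  Nothing ever "returns" to its caller in the usual sense.
def square_k(x, k):
    return k(x * x)

def add_squares_k(x, y, k):
    return square_k(x, lambda xx:
           square_k(y, lambda yy:
           k(xx + yy)))

add_squares_k(3, 4, print)  # prints 25
```

With obligatory tail-merging, the chain of k's is all the "stack" there is.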

This is also the prevailing terminology in Common Lisp, despite the
difference in semantics.  I often see CL programmers label functional
arguments "cont" or "continuation" when those functions are only used to
return value(s) in a straightforward way.  For instance, here is some CL
I wrote [5], vintage 2000:

        (defmethod map-parts ((def has-parts-mixin) continuation)
          (dolist (part (def-parts def))
            (funcall continuation part)))

        (defmethod map-extent-parts ((def has-parts-mixin) x1 y1 x2 y2 continuation)
          (dolist (part (def-parts def))
            (when (bbox-overlaps-extent? part x1 y1 x2 y2)
              (funcall continuation part))))

The point of these abstractions, BTW, is to allow for the possibility of
a more elaborate class that replaces the simple list of parts with a
geometric search data structure, such as a quadtree.  If that happens,
the execution trace can be quite elaborate, though it is still all
stack-based.
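For readers who don't speak Lisp, here is the same idiom in Python, with a
hypothetical Part class standing in for has-parts-mixin:

```python
class Part:
    def __init__(self, name, bbox):
        self.name = name
        self.bbox = bbox   # (x1, y1, x2, y2)

def map_parts(parts, continuation):
    # "continuation" is just a functional argument, called once per part.
    for part in parts:
        continuation(part)

def overlaps(bbox, x1, y1, x2, y2):
    bx1, by1, bx2, by2 = bbox
    return bx1 < x2 and x1 < bx2 and by1 < y2 and y1 < by2

def map_extent_parts(parts, x1, y1, x2, y2, continuation):
    for part in parts:
        if overlaps(part.bbox, x1, y1, x2, y2):
            continuation(part)
```

Passing a closure that appends to a list plays the role of the funcall'd
"continuation" in the CL version.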

   Note that this common usage is legitimate under the "rest of the
computation" definition, but not the "goto execution state" definition.
And please remember that this is not just my idiom; I think I started
seeing this sort of thing in the Lisp Machine source code about 20 years
ago.

   >    Of course, you may also blow out the stack if you try to write
   > Scheme-style loops without tail-merging.  But Common Lisp allows you to
   > enable tail-merging selectively, as a compiler optimization.

   I believe that "tail-merging" is usually called "tail-call
   elimination."  If you mean something else, then please explain.

I've also heard "tail-call optimization" used.  AFAIK, all terms are
current in the Lisp compiler community, but since Lispers use them more
often than the rest of the world, we generally gravitate towards the
Huffman-coded label.  ;-}  I apologize if this is a source of confusion.

   >    I also believe (from having written complex CL applications) that the
   > use of functional arguments (closures, continuations, whatever) reduces
   > complexity, sometimes considerably.  This is a direct consequence of
   > Graham's "succinctness = power" thesis . . .

   I agree on the power and utility of closures.  I do not agree on
   continuations.  Please note that your CL experience is of no utility
   on the question of whether continuations are good because CL does not
   have continuations.

Only per your definition.  (And your assumption that my understanding is
limited to my CL experience.)

   Furthermore, this is not an accidental oversight: the CL committee
   looked at continuations, decided that continuations are Bad Things,
   and explicitly chose to leave them out.

X3J13 decided that continuations were Bad Things because X3J13 was
mostly made up of representatives from Lisp vendors who had invested
heavily in their stack-based runtime systems, and many figured they
would go broke trying to produce a heap-based runtime.  If you have
other evidence, I'd love to hear it.

   >    Abstraction.  Speed.  Debuggability.  Pick any three.  ;-}

   If you were arguing for closures, I'd agree with you.  But you're
   arguing for continuations.

   Cheers,
   Ben

I'm arguing for both, because my operational definition of continuation
includes the possibility of using closures in their implementation,
without necessarily needing hairy state-munging magic.  But there are
times (I further argue) when you need the magic, too.

                                        -- Bob

[1]  http://en.wikipedia.org/wiki/Continuation

[2]  See PDB_backtrace in http://svn.perl.org/parrot/trunk/src/debug.c

[3]  http://www.mail-archive.com/perl6-compiler@perl.org/msg01404.html

[4]  This is called "CP transformation", which is better described at
     http://en.wikipedia.org/wiki/Continuation-passing_style (though,
     oddly, not named explicitly).

[5]  http://rgrjr.dyndns.org/icancad/
 
_______________________________________________
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm
