From: Leopold Toetsch <[EMAIL PROTECTED]>
   Date: Mon, 13 Nov 2006 21:19:26 +0100

   Am Montag, 13. November 2006 03:56 schrieb Bob Rogers:
   > +There are two techniques for implementing dynamic binding.  These are
   > +traditionally called deep binding and shallow binding [2].

   Can you please consider the impacts of a third variant using STM, which is 
   already implemented . . .

   Thanks,
   leo

Well, I see that STM transactions nest, but they don't seem to roll back
in case of an error.  In the test case below, you can see that the
innermost temporary binding is not undone when control transfers to the
error handler.  In this particular case I could put the "stm_abort" in
the error handler, but in general I wouldn't know how many nested
transactions to unwind.  I'm sure it would work to put an "stm_abort" in
a pushaction sub wherever I do "stm_start", but if I were going to go
that way, I might as well put "set_global" there instead; that would
also save me the trouble of deciding whether I needed to change the
stored PMC to an STMRef.  But either way, full continuations (and
therefore coroutines) would still not work properly, as the actions
would be run repeatedly.  Or am I missing something about STM?
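
For concreteness, here is roughly what that pushaction-plus-"set_global"
alternative amounts to, sketched in Python rather than PIR (the names are
made up; pushaction registers a sub to run when the frame is unwound, which
I model here with try/finally):

```python
# Sketch (hypothetical names, not Parrot internals) of shallow binding via
# an unwind action: save the old global, install the new one, and restore
# the old one when the dynamic extent is left, normally or by error.

globals_tbl = {"g": 1}        # stand-in for the global namespace

def with_binding(value, body):
    old = globals_tbl["g"]    # save the current binding
    globals_tbl["g"] = value  # shallow binding: overwrite in place
    try:
        return body()
    finally:
        globals_tbl["g"] = old  # runs on normal exit *and* on error

result = with_binding(42, lambda: globals_tbl["g"] * 2)
```

The cleanup fires exactly once per entry into the dynamic extent, which is
precisely why a full continuation that re-enters the extent re-runs the
action.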

   In general, I would expect invoking a continuation to restore the
dynamic bindings in effect when the continuation was taken.  As an
example, see the "coroutines, GC" test case added to t/op/globals.t by
the patch: two coroutines each bind the same variable to a different
value, and yielding from one and re-invoking the other flip-flops
between those same two bindings.  Note that there are still
only two bindings; changes to the binding made in one environment would
persist until the next cycle.  I don't have any language implementation
models to cite, but this behavior strikes me as Clearly The Right Thing:
Dynamic bindings should be visible only from the point of binding and
downward.
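
The flip-flop behavior I mean can be sketched with Python generators (again
a hypothetical model, not the test case itself): each coroutine swaps its
own binding of a shared dynamic variable out when it suspends and back in
when it resumes, so each one sees only the binding it established.

```python
# Sketch: two coroutines bind the same dynamic variable to different
# values; yielding from one and re-invoking the other flip-flops between
# the two bindings, and neither binding leaks into the other's extent.

env = {"x": "outer"}          # stand-in for the dynamic environment

def coro(value, log):
    saved = env["x"]          # save the binding in effect at entry
    env["x"] = value          # establish this coroutine's own binding
    try:
        for _ in range(2):
            env["x"], saved = saved, env["x"]   # unwind on suspend
            yield
            env["x"], saved = saved, env["x"]   # rewind on resume
            log.append(env["x"])                # what this coroutine sees
    finally:
        env["x"] = saved

log = []
a, b = coro("A", log), coro("B", log)
next(a); next(b)              # both established a binding, then suspended
next(a); next(b)              # each resumes and sees only its own binding
# log is now ["A", "B"]
```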

   IIUC, stm_start and stm_abort assume a single linear series of
changes for a given interpreter, where all 'start ... commit/abort'
pairs nest properly.  If so, this works for stack-based interpreters
(and could be made to work for limited continuations that are restricted
to returning down the stack), but not for full continuations.  If STM
transactions were made part of the dynamic environment, then that would
be a different story, but in that case STM transaction nesting would be
nonlinear, and I have no clue whether doing so would require a small
change to the STM code or a major rewrite.  Or it might break STM in
other ways; I do not at all grok STM (and reading the code is not going
to get me there in a hurry).
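
The linear-nesting assumption can be pictured as a single per-interpreter
stack (a guess at the shape of the problem, not a description of the actual
STM code):

```python
# Sketch: stm_start/stm_abort as push/pop on one per-interpreter stack.
# This pairs up correctly only when 'start ... commit/abort' regions nest
# like a call stack (LIFO).

txn_stack = []

def stm_start():
    txn_stack.append({})            # open a new innermost transaction

def stm_abort():
    txn_stack.pop()                 # discard the innermost transaction

stm_start()                         # outer transaction
stm_start()                         # inner transaction
stm_abort()                        # fine: unwinds the inner one
depth_after_lifo = len(txn_stack)   # outer is still open

# A non-local exit (an error, or a continuation) that skips the matching
# abort leaves the stack deeper than the dynamic context suggests:
stm_start()
depth_after_escape = len(txn_stack)
```

A full continuation that jumps back *into* a transaction's extent is worse
still: there is no single stack shape that describes the nesting anymore.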

   Or perhaps we should give up on full continuations, since STM seems
to be yet another feature that breaks/is broken by them.  Seriously.

                                        -- Bob

------------------------------------------------------------------------
[EMAIL PROTECTED]> cat hacks/stm3.pir 

.sub main :main
        push_eh handler
        $P0 = new .Integer
        $P0 = 42
        $P1 = new .STMRef, $P0    # localize $P0
        stm_start                 # 1st level
        $P1 = 45
        print $P1
        print "\n"
        foo($P1)
        print $P1
        print "\n"
        stm_abort                 # un-local ;)
        print $P1
        print "\n"
        end
handler:
        .local pmc e
        .local string s
        .get_results (e, s)
        print "Error: "
        print s
        print "\n"
        print $P1
        print "\n"       
.end

.sub foo
        .param pmc var
        print "in foo\n"
        stm_start                 # 2nd level 
        var = 49
        print var
        print "\n"
        ## oops; we got an error.
        $P3 = new .Exception
        $P3[0] = 'oops'
        throw $P3

        print "leaving foo normally\n"
        stm_abort                 # un-local ;)
.end
[EMAIL PROTECTED]> ./parrot hacks/stm3.pir 
45
in foo
49
Error: oops
49
[EMAIL PROTECTED]> 
