>>>>> "AT" == Adam Turoff <[EMAIL PROTECTED]> writes:

AT> On Tue, Oct 24, 2000 at 10:55:29AM -0400, Chaim Frenkel wrote:
>> I don't see it.
>> 
>> I would find it extremely awkward to allow 
>> 
>> thread 1:    *foo = \&one_foo;
>> thread 2:    *foo = \&other_foo;
>> [...]
>> 
>> copy the &foo body to a new location.
>> replace the old &foo body with an indirection
>> 
>> (I believe this is atomic.)

AT> Actually, that shouldn't be awkward, if both threads have their own
AT> private symbol tables.

As you pointed out below, we lose the use of bytecode threading, since
all lookups have to go through the symbol table. In fact, we probably
lose most pre-compilation wins for the same reason: every function call
becomes a symbol-table lookup.

AT> In any case, that's a different kind of threading.   IPC threading
AT> has nothing to do with bytecode threading (for the most part).
AT> What you describe here is IPC threading.  Larry was talking about 
AT> bytecode threading.  :-)

No, I was pointing out the interaction between the two. If we want
the two execution threads to be able to have separate meanings for *foo,
then we need separate symbol tables. If we want them shared, then
we need mutexes and dynamic lookups. And if we want shared optrees
(or threaded bytecode), we need to prevent this or find a usable workaround.

AT> Bytecode threading is a concept pioneered in Forth ~30 years ago.  Forth
AT> compiles incredibly easily into an intermediate representation.  That 
AT> intermediate representation is executed by a very tight interpreter
AT> (on the order of a few dozen instructions) that eliminates the need for 
AT> standard sub calls (push the registers on the stack, push the params
AT> onto the stack and JSR).

The interpreter is optional; there does not need to be one. With
subroutine threading the pointers become machine-level JSR instructions,
which removes the interpreter loop and runs at machine speed.

AT> Forth works by passing all parameters on the data stack, so there's no
AT> explicit need to do the JSR or save registers.  "Function" calls are done
AT> by adding bytecode that simply says "continue here" where "here" is the
AT> current definition of the sub being called.  As a result, the current 
AT> definition of a function is hard-coded into the callee when the caller
AT> is compiled. [*]

Err, from my reading there is no "come from". The PC is saved on the
return stack and restored when the inner loop exits the current
nesting level.

Actually, being able to use registers would do wonders to speed up the
calls. I vaguely recall that the SPARC has register windows, so most of
the parameters could be passed in registers.

(At this point we are talking about a major porting effort. But the
inner loop of a TIL — threaded interpretive language — would be the
easiest part to port.)

AT> The problem with 
AT>     *main::localtime = \&foo;
AT>     *foo = \&bar;
AT> when dealing with threaded bytecode is that the threading specifically 
AT> eliminates the indirection in the name of speed.  Because Perl expects
AT> this kind of re-assignment to be done dynamically, threaded bytecodes
AT> aren't a good fit without accepting a huge bunch of modifications to Perl 
AT> behavior as we know it.  (Using threaded bytecodes as an intermediate
AT> representation also confounds optimization, since so little of the
AT> context is saved.  Intentionally.)

Actually, from my reading, one doesn't have to lose it entirely.
Every TIL function has some sort of header, and a pointer to a TIL
function points at the first entry/pointer in that header. For real
machine-level code, that entry points to a routine that invokes the
code directly; for a higher-level TIL function, it points to a routine
that nests the inner loop.

In the event of redirecting a function, this pointer can be redirected
to an appropriate routine that either fixes up the original code or
simply dispatches to the new version. (And the old code can be
reactivated easily.)

AT> *: Forth is an interactive development environment of sorts and
AT> the "current definition of a sub" may change over time, but the
AT> previously compiled calling functions won't be updated after the
AT> sub is redefined, unless they're recompiled to use the new definition.

The current definition of a sub doesn't change. Only a new entry in the
dictionary (symbol table) now points at a new body. If that definition
is deleted, the old value reappears.

This is no different from what happens in PostScript. One makes the
decision either to do
        /foo { ... } def 
and take the lookup hits, or
        /foo { ... } bind def
and lock in the current meanings.

<chaim>
-- 
Chaim Frenkel                                        Nonlinear Knowledge, Inc.
[EMAIL PROTECTED]                                               +1-718-236-0183
