On Tue, Apr 10, 2007 at 10:52:33AM +0200, Karsten Otto wrote:
> Ok, I haven't studied this Actor model so far, let me see if I  
> understood it correctly.

Note that what I'm proposing here is an elaboration on the model -- a 
literal implementation of the model would put each actor in a separate 
*process* and compel all communication over sockets (this would enforce 
the no-shared-state aspect of actors).  However, even on modern 
operating systems, having 10,000 vobjects as 10,000 separate processes 
is not going to be efficient, so we need to allow the implementation to 
take shortcuts.

> If a message comes in from the network, you determine its target  
> vobject and method(?). You then try to bind/lock the target, and  
> execute its method in a thread you pick from a pool. If the vobject  
> is already bound, you just queue the message for later execution.  
> Once the vobject gets unbound/unlocked, you pick another thread from  
> the pool and pass the queued message to the target method. (Actually,  
> it does not matter whether the message came from the network or some  
> other actor, which is neat).

Right.  The goal is to unify message passing so that it genuinely does 
not matter if a vobject is local or remote or native code or a script.  
I'd also like to support transparent process migration, so that a 
vobject could move to another host and have its messages forwarded to 
it.  The trick is to support all this AND still be efficient and easy to 
use.

> While a vobject method is executing, it can send messages to other  
> vobjects, which either bind and execute the target *in the same  
> thread* or queue a message, as before. In the latter case, you get a  
> future you can query for completion or wait on.

The purpose of allowing execution within the same thread (when the 
target vobject is unbound) is that usually the actual method handler 
code that needs to be executed is either small and quick to execute, or 
the caller is likely to block anyway (in the case of reading a property), 
so there is less overhead if we just service the request immediately.

For the cases where a call is likely to start a long-running process, it 
is the responsibility of the programmer (of the method handler 
implementation, not the caller) to mark these methods as such so the 
system knows to spin them off into a separate thread.

> Assuming I got it right so far, I wonder what happens if you decide  
> to wait on your future. When you do this, you block the current  
> thread, which nevertheless still holds the locks of  / is still bound  
> to one or more vobjects. In other words, the deadlock just waits  
> around the corner - e.g. two vobjects with separate threads and  
> futures for each other. Or does waiting automatically release the  
> blocks/bindings? In that case, what happens once the thread unblocks?  
> And can another thread bind/lock the vobject in the meantime?

This is a good point; blocking makes everything more difficult, and when 
I was writing about it originally I was hesitant to include it at all.  
Now I see my instinct was correct.  The two solutions I see are:

 - Disallow blocking, and require a continuation-passing style where 
futures+callbacks are chained together and returned up the stack to the 
main event loop, at which point the continuation is added to the 
scheduler to be executed when the commitment is either satisfied or 
fails.

 - User-level threading on top of OS threads.  A "blocking call" would 
save the stack and return to the main event loop (using swapcontext() or 
the Windows equivalent), at which point the (user-level) thread is added 
to the scheduler to be resumed when the commitment is either satisfied 
or fails.  If this sounds similar to the first case, that's because it 
is.  The key difference here is that it requires less work on the part 
of the user (less boilerplate, code doesn't have to be split up across 
methods).  The disadvantage is that users may be less aware of what is 
going on, and that other methods could be called while their handler is 
blocked.

In either case, when a vobject chooses to wait for a result, that 
vobject and all the vobjects in its call stack would be permitted to 
service other requests.  This solves the problem you present: the 2nd 
vobject in the 2nd thread is eventually able to call the 1st vobject, 
because the 1st vobject has yielded control and can handle new requests.

I should also note these two options aren't incompatible, so there's 
likely no reason we can't have both continuations and user-level 
threads.  My preference is towards cooperative threads; unfortunately, 
portability is going to be difficult, as Linux, Windows, and OS X all 
seem to use different APIs for this.  I haven't yet found a
library that abstracts it across operating systems in the same way that 
has been done for OS threads.  Most "portable user-level thread 
libraries" I have found actually use setjmp/longjmp, which doesn't 
preserve the stack.

-- 
[   Peter Amstutz  ][ [EMAIL PROTECTED] ][ [EMAIL PROTECTED] ]
[Lead Programmer][Interreality Project][Virtual Reality for the Internet]
[ VOS: Next Generation Internet Communication][ http://interreality.org ]
[ http://interreality.org/~tetron ][ pgpkey:  pgpkeys.mit.edu  18C21DF7 ]

_______________________________________________
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d
