It's been a while since I posted about this stuff, so I thought I'd give you an update.
I've now got enough of the script-language-to-CLR compiler written to compile the script language versions of the mono logic, Fibonacci and loops benchmarks. The current compiler generates code which runs at about half the speed of the code generated by mcs. That's still about 400 times faster than the current interpreted implementation, though, which is good news.

I've also implemented a simple app which embeds mono and uses NPTL to test the one-thread-per-script approach to concurrency. Using NPTL threads with 32K stacks running the logic benchmark, I managed to create 8000 threads which ran concurrently, which is plenty. I think I may have found a bug while building this application, though: if garbage collection occurs while a thread is being attached, the GC can abort complaining about an unknown thread. There is code to avoid this happening when a thread is created by mono, but not when one is attached.

Although NPTL gives me the level of concurrency I need, I still need to be able to transparently stop running scripts, ship them over to another server and restart them, in order to deal with scripted objects crossing server boundaries. This brings me back to the issue of saving and loading stacks, which we established is hard. The two possible approaches seem to be to build continuations on the heap, as in PicoThreads, or to write C functions, called from the CLR code, which save and restore the stack. Any idea how I'd go about implementing the C function approach? Any idea which approach is going to be easier?

In the future we'd also like more control over scheduling, so that we can allocate different fractions of script CPU time to different scripts. Once we can save and load stacks in order to migrate scripts, we might be able to use that mechanism as the basis of user threading, which would give us more control over script scheduling.

Cheers,

Jim.
--- Paolo Molaro <[EMAIL PROTECTED]> wrote:
> On 02/11/05 Jim Purbrick wrote:
> > I think this approach generalises to the continuation based
> > approach used by PicoThreads in Java[1]: you need to build chains
> > of continuations on the heap that you can switch between to switch
> > threads, something that Dan Sugalski suggests is really inefficient
> > and one of the reasons you might want to use Parrot instead of the
> > JVM or Mono[2].
>
> Except Parrot is still not suitable to develop any of this stuff:-)
> Anyway, as I said it will be slow: with mono 2.0 heap allocations
> should become much faster, so, depending on your timeframe it may
> not be so bad.
>
> > Fibers are cooperatively scheduled light weight threads in
> > Win32[3]. The core of the approach boils down to:
> >
> > schedule()
> > {
> >     saveCLRLogicalThreadState(currentThreadState);
> >     switchOSFiber(nextFiber);
> >     restoreCLRLogicalThreadState(nextThreadState);
> > }
> >
> > Which is used to keep the logical managed thread state and fiber in
> > sync. They had some problems with exceptions using this approach,
> > and the logical thread save and restore methods may be something
> > that's only available in Rotor.
>
> I don't see us implementing any of the support needed for this,
> though of course we would accept good patches to do it. Personally,
> I don't think the complexity it introduces is worth it for the
> default mono behaviour (the good thing about mono is that you could
> write the support and use your own build with it enabled to run your
> app:-).
>
> > > It should be pretty safe if you inject in the user code checks
> > > for a global var that signals the event
> >
> > Couldn't I just call Thread.Suspend from the main thread after the
> > timeout?
>
> Yes, but that could lead to deadlocks, since the script might hold a
> lock that another script or your engine needs to acquire or, worse,
> it may have a lock in the mono runtime.
> We'll be putting some protection against the latter issue in the
> future, but until then it's better to make the script call Suspend,
> because you can ensure it's in a safe place wrt your engine (and
> calling Suspend from managed code on the current thread is safe wrt
> the mono runtime state).
>
> > > The Suspend method is marked obsolete in 2.0, because of the
> > > potential for deadlocks, but using it this way should be safe.
> >
> > Will I still be able to use this approach with future versions of
> > Mono then?
>
> Obsolete means the call will still work, but you'll have a warning
> when compiling (if you use a C# compiler or the like).
>
> > > Since most of the scripts should terminate within the timeout,
> > > if I understood correctly, you should just have a number of
> > > threads created as many as the slow scripts
> >
> > Yes, as all bar one of the threads would be suspended, that
> > presumably wouldn't cause any context switching problems. The only
> > problem would be that each suspended thread would still have a
> > full OS thread stack, so might burn a pile of memory.
>
> The Mono 2.0 API already implements the calls to specify your own
> thread stack size: the minimum is 128 KB IIRC, but this will allow
> you to have 100 'slow' scripts in 13 MBs of memory. If that's
> enough for your requirements, this could be a good approach.
> lupus
>
> --
> -----------------------------------------------------------------
> [EMAIL PROTECTED]                                     debian/rules
> [EMAIL PROTECTED]                            Monkeys do it better

_______________________________________________
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list