On Mon, Mar 31, 2003 at 10:50:59AM -0800, Michael G Schwern wrote:
> I must write my code so each operation only takes a small fraction of time
> or I must try to predict when an operation will take a long time and yield
> periodically.

Really, why? When you still have computation to do before you can produce your output, why yield? There are certainly scenarios where you'd want each thread to get a "fair share" of computation time, but if the output from all threads is desired, whoever is waiting for them probably won't care which thread gets to do its computation first.


> Worse, I must trust that everyone else has written their code to the above
> spec and has accurately predicted when their code will take a long time.

Both this and the above can easily be solved by a timer event that forces a yield. Most of the synchronization issues this would introduce can probably be avoided by deferring the yield until the next "checkpoint" determined by the compiler (say, the start of the next loop iteration).
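
To make that concrete, here's a rough sketch in C (my own illustration, nothing Parrot-specific; the scheduler entry point is just a stub). The timer handler merely sets a flag, and the actual yield is deferred to a checkpoint call that the compiler would emit once per loop iteration:

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t yield_requested = 0;

    /* the timer handler does nothing but record the request */
    static void on_timer(int sig)
    {
        (void)sig;
        yield_requested = 1;
        alarm(1);                      /* re-arm for the next tick */
    }

    static int yields = 0;

    /* stand-in for a real scheduler entry point */
    static void switch_to_next_thread(void)
    {
        printf("checkpoint reached, yielding (%d)\n", ++yields);
    }

    /* the compiler would emit this call at safe points */
    static void checkpoint(void)
    {
        if (yield_requested) {
            yield_requested = 0;
            switch_to_next_thread();
        }
    }

    int main(void)
    {
        signal(SIGALRM, on_timer);
        alarm(1);

        unsigned long work = 0;
        while (yields < 3) {
            work++;                    /* one small unit of user computation */
            checkpoint();              /* any forced yield happens only here */
        }
        printf("did %lu units of work\n", work);
        return 0;
    }

Since the handler never touches user data and the switch only ever happens at a checkpoint, no operation is interrupted halfway through.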


I think this is a minor problem compared to the hurdles (and overhead!) of synchronization.

> Cooperative multitasking is essentially syntax sugar for an event loop.

No, because all thread state is saved across a yield. In syntax and semantics, cooperative threads are much closer to preemptive threads than to event loops.
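
To illustrate, here's a small sketch using the POSIX ucontext calls (my own example, not code from perl 5 or parrot). The worker's loop counter and resume point survive each yield automatically; an event-loop callback would have to package that state up by hand:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, worker_ctx;

    static void worker(void)
    {
        for (int i = 0; i < 3; i++) {
            printf("worker: step %d\n", i);      /* i lives on the worker's own stack */
            swapcontext(&worker_ctx, &main_ctx); /* yield; all state is kept implicitly */
        }
    }

    int main(void)
    {
        static char stack[64 * 1024];

        getcontext(&worker_ctx);
        worker_ctx.uc_stack.ss_sp = stack;
        worker_ctx.uc_stack.ss_size = sizeof stack;
        worker_ctx.uc_link = &main_ctx;          /* return to main when worker ends */
        makecontext(&worker_ctx, worker, 0);

        for (int i = 0; i < 3; i++) {
            printf("main: resuming worker\n");
            swapcontext(&main_ctx, &worker_ctx); /* run worker until its next yield */
        }
        return 0;
    }

The only thing that distinguishes this from preemptive threads is *when* the switch happens, not how the code is written.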


> We need good support at the very core of the language for preemptive
> threads. perl5 has shown what happens when you bolt them on both internally
> and externally. It is not something we can leave for later.

I think perl 6 will actually make it rather easy to bolt it on later. You can use fork(), let the OS handle the details, and use tied variables for sharing. I believe something along these lines already exists for p5 and is apparently faster than ithreads. I haven't dug into it though; maybe it has problems of its own. No doubt you'll point 'em out for me ;-)
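
Roughly what I mean, sketched in C rather than perl (my own illustration, not the p5 module I'm thinking of): fork() gives you OS-scheduled preemption for free, and an explicitly shared mapping plays the role the tied variables would.

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* an anonymous shared mapping survives the fork and is visible to both */
        int *counter = mmap(NULL, sizeof *counter, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (counter == MAP_FAILED)
            return 1;
        *counter = 0;

        pid_t pid = fork();
        if (pid == 0) {                   /* child: do its share of the work */
            for (int i = 0; i < 1000; i++)
                __sync_fetch_and_add(counter, 1);   /* atomic, so no lock needed */
            _exit(0);
        }

        for (int i = 0; i < 1000; i++)    /* parent works concurrently */
            __sync_fetch_and_add(counter, 1);

        waitpid(pid, NULL, 0);
        printf("counter = %d\n", *counter);   /* 2000: both saw the same variable */
        return 0;
    }

Only the data you explicitly share is shared, which is exactly the property that makes the tied-variable approach cheap compared to ithreads' copy-everything model.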


> Cooperative multitasking, if you really want it, can be bolted on later or
> provided as an alternative backend to a real threading system.

I agree it can be bolted on later, but so can preemptive threads, probably. As Simon pointed out, optimizing for the common case means skipping threads altogether for now.


And I resent how you talk about non-preemptive threading as not being "real" threading. Most embedded systems use tasking/threading models without round-robin scheduling, and people who try to move applications that perform real-time tasks from MacOS 9 to MacOS X curse the preemptive multitasking the latter has.

--
Matthijs van Duin  --  May the Forth be with you!
