> >> Not quite. COND_WAIT takes an opaque type defined by the platform, that
> >> happens to be a mutex for the pthreads based implementation.
>
> > It should, but it doesn't. Here's the definition:
> >
> >   #  define COND_WAIT(c,m) pthread_cond_wait(&c, &m)
>
> You are already in the POSIX specific part.
It came from thr_pthread.h, so it should be POSIX. The issue here is that it's #define COND_WAIT(c,m) instead of #define COND_WAIT(c). Every place in the code, whether it's Win32 or POSIX, is going to have to pass in a condition variable and a mutex. Just because Win32 will ignore the second parameter, that isn't going to prevent the code from creating the mutex, initializing it, and passing it in.

> >> I'm not sure, if we even should support Win9{8,5}.
>
> > I'd be happy with simply implementing Win9x as a non-threaded
> > platform. Of course, hopefully nobody will even ask...
>
> We'll see. But as Parrot's IO system is gonna be asynchronous in core, I
> doubt that we'll support it.

Obviously Parrot has to run on non-threaded platforms where the kernel threading and AIO stuff just won't work. You can still do user threads, but file IO will still block everything.

> > rationale. I can understand why there would need to be a global event
> > thread (timers, GC, DoD), but why would passing a message from one
> > thread to another need to be serialized through a global event queue?
>
> The main reason for the global event queue isn't message passing. The
> reason is POSIX signals. Basically you aren't allowed to do anything
> serious in a signal handler, especially you aren't allowed to broadcast
> a condition or something.
>
> So I came up with that experimental code of one thread doing signals.

Yes, there has to be a separate thread to get signals, and each thread needs its own event queue, but why does the process have a global event_queue? I suppose there are generic events that could be handled just by the next thread to call check_events, but that isn't what this sounds like.

> > And as for IO, I see the obvious advantages of performing synchronous
> > IO functions in a separate thread to make them asynchronous, but that
> > sounds like the job of a worker thread pool.
> > There are many ways to
> > implement this, but serializing them all through one queue sounds like
> > a bottleneck to me.
>
> Yes. The AIO library is doing that anyway, i.e. utilizing a thread pool
> for IO operations.

I don't see why there needs to be a separate thread to listen for IOs to finish. Can't that be the same thread that listens for signals? That is, the IO thread just spends its whole life doing select(). If it gets a signal, select() should return EINTR, so the thread could then check a flag to see which signal was raised, queue the event in the proper queue(s), and call select() again.

OK, I think I understand why... the event thread is in a loop waiting for somebody to tell it that there's an event in the global event queue... which is really the part I don't get yet.

> Dan did post a series of documents to the list some time ago. Sorry I've
> no exact subject, but with relevant keywords like "events" you should
> find it.

Yeah, I remember reading some of his discussions with Damien Neil, because I think I went to school with him.

Anyway, here's my first draft for a Win32 event model. First, I should clarify what I'm talking about when I say Win32.

Win32 IS NOT: The MS Services for Unix package provides a POSIX subsystem for Windows called Interix which is completely separate from Win32 (i.e. no GUI is possible, no Win SDK calls are available). It has fork(), symlinks, pthreads, SysV IPC, POSIX signals, ptys, and maybe even AIO. This config would be compiled like any other Unix variant with its own idiosyncrasies.

Win32 IS PROBABLY NOT: There are various POSIX emulation layers for Win32, such as Cygwin and MinGW. These provide many function calls that Unix programs expect, but only to the degree that the Win32 subsystem allows (e.g. chmod likely will not do anything sensible). Since these programs still run under the Win32 subsystem, Windows GUIs are still possible. I don't know how these will interact with my event model.
Win32 IS: This is the standard Win32 API as defined by NT4.0sp6a and higher. If you want to drop support for NT4, then we go to Win2k, but don't gain much.

GUI message queues in Win32 are per thread. Each thread has a message queue that is autovivified. Any window that a thread creates has its messages sent to that thread's queue. However, there is no reason that a message actually has to have an associated window. You can send any thread in any process a message, so long as the thread has had its queue autovivified and is not crossing security boundaries.

All files or things that look like files can be opened for async access. For example, sockets, files, and pipes can all be async. Any read, write, lock, unlock, or ioctl call can either signal a condition var (Win32 calls them "events", and they don't have POSIX cond_var semantics) or cause an event to be queued to an IO completion port. Read and write calls also have the option to queue a callback upon completion, but since this will only run when we check for them, I don't think this is useful.

An IOCompletionPort is just an object that can queue completion events. Once an async file handle has been associated with an IOCP, any IOs on that handle will cause an event to be sent to the IOCP when complete. When a thread reads an event, it will block until one is available, then receive the oldest one. Any number of threads may wait at once, although the most recent waiter gets the next event to minimize context switches and cache misses. Since any thread can post an arbitrary message to an IOCP, this may serve as our main event queue.

Win32 has no signals as POSIX describes them, but there is a way to specify a routine to get called asynchronously when your terminal receives a CTRL+C or CTRL+BREAK, the close box is clicked, or the system is being shut down.
I don't see a need for specific event or IO threads, so I propose a pool of worker threads (starting at 1, created as needed, possibly up to N where N is somewhere around the number of CPUs). These threads all just sit around waiting on the IOCP until something happens. When an IO completes, obviously an IO completion event would be queued. When somebody hits CTRL+C, the signal handler would post a CTRL+C event to the queue. When somebody needs a procedure to be run, they post a callback event.

Timers are implemented with WaitableTimer objects. One of the pool threads can set a timer to run a callback when the timer fires. This callback would do the same event dispatching that getting an IOCP event would do.

If a thread wants to send an event to another thread, it would do the event dispatching itself. I don't see any reason it has to go through the main event queue.

To actually perform an IO, the interpreter thread would pin the buffer to prevent it from getting GC'd and call the IO function. When the IO completes, the event will get queued to the IOCP and eventually a pool thread will dequeue it. At this point it's safe to unpin the R/W buffer. Now, depending on what the caller asked for, either the IO op is just marked as completed, or a callback event is dispatched to the thread that started the IO.

The most important part for Windows, of course, is how to handle Windows messages. I think that a process should just be able to install a module which will have a procedure that runs whenever check_events gets called. The thread would process all events in its queues, then call the Windows message processing functions. If anybody registered event handlers for messages, they get dispatched; otherwise Windows will handle them normally. Note that these messages (e.g. "the system is low on battery") may not be associated with a particular window.

Finally, a note for JIT core writers: Win32 allows you to suspend individual threads.
So when a thread gets an important signal, you can queue it, suspend the thread, adjust its stack frame or code so that the next return or jump will be to check_events, and resume the thread.

GNS