Author: allison
Date: Tue Apr 11 13:28:33 2006
New Revision: 12178

Added:
   trunk/docs/pdds/clip/pddXX_events.pod
   trunk/docs/pdds/clip/pddXX_threads.pod

Changes in other areas also in this revision:
Modified:
   trunk/   (props changed)

Log:
Very, very early drafts of the threads and events PDDs.

Added: trunk/docs/pdds/clip/pddXX_events.pod
==============================================================================
--- (empty file)
+++ trunk/docs/pdds/clip/pddXX_events.pod       Tue Apr 11 13:28:33 2006
@@ -0,0 +1,156 @@
+# Copyright: 2001-2006 The Perl Foundation.
+# $Id: $
+
+=head1 NAME
+
+docs/pdds/clip/pddXX_events.pod - Parrot Events
+
+=head1 ABSTRACT
+
+This document defines the requirements and implementation strategy for
+Parrot's event subsystem.
+
+=head1 VERSION
+
+$Revision: $
+
+=head1 DESCRIPTION
+
+Description of the subject.
+
+=head1 DEFINITIONS
+
+Definitions of important terms. (optional)
+
+=head1 IMPLEMENTATION
+
+[Excerpt from Perl 6 and Parrot Essentials to seed discussion.]
+
+An event is a notification that something has happened: the user has
+manipulated a GUI element, an I/O request has completed, a signal has
+been triggered, or a timer has expired.  Most systems these days have an
+event handler (often two or three, which is something of a problem),
+because handling events is so fundamental to modern GUI programming.
+Unfortunately, the event handling system is often poorly integrated
+with the I/O system, or not integrated at all, leading to nasty code
+and unpleasant workarounds to try to make a program responsive to
+network, file, and GUI events simultaneously. Parrot presents a unified
+event handling system, integrated with its I/O system, which makes it
+possible to write cross-platform programs that work well in a complex
+environment.
+
+Parrot's events are fairly simple. An event has an event type, some
+event data, an event handler, and a priority. Each thread has an event
+queue, and when an event happens it's put into the right thread's
+queue (or the default thread queue in those cases where we can't tell
+which thread an event was destined for) to wait for something to
+process it.
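+
+The following C sketch shows one way those pieces might fit together.
+It is purely illustrative: the type names, fields, and layout here are
+assumptions for discussion, not Parrot's actual internals.
+
+    /* Hypothetical event and per-thread event queue (illustrative only) */
+    #include <pthread.h>
+
+    typedef enum { EVENT_IO, EVENT_SIGNAL, EVENT_TIMER, EVENT_USER }
+        event_type;
+
+    typedef struct event {
+        event_type    type;      /* what kind of event this is         */
+        void         *data;      /* event-specific payload             */
+        void        (*handler)(struct event *);  /* NULL => no handler */
+        int           priority;  /* higher value means more urgent     */
+        struct event *next;      /* link within the queue              */
+    } event;
+
+    typedef struct event_queue {
+        event          *head;          /* events waiting to be handled    */
+        pthread_mutex_t lock;          /* queue is filled from any thread */
+        pthread_cond_t  ready;         /* signalled when an event arrives */
+        int             min_priority;  /* events below this stay queued   */
+    } event_queue;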
+
+Any operation that would potentially block drains the event queue
+while it waits, as do a number of the cleanup opcodes that Parrot uses
+to tidy up on scope exit. Parrot doesn't check for an outstanding event
+at every opcode, purely for performance reasons, as that check gets
+expensive quickly. Still, Parrot generally ensures timely event
+handling, and events shouldn't sit in a queue for more than a few
+milliseconds unless event handling has been explicitly disabled.
+
+When Parrot does extract an event from the event queue, it calls that
+event's event handler, if it has one. If an event doesn't have a
+handler, Parrot looks for a handler for that event type and calls that
+instead. If for some reason there's no handler for the
+event type, Parrot falls back to the generic event handler, which
+throws an exception when it gets an event it doesn't know how to
+handle.  You can override the generic event handler if you want Parrot
+to do something else with unhandled events, perhaps silently
+discarding them instead.
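+
+Continuing the hypothetical C sketch above, the fallback chain could
+look roughly like this; none of these names are Parrot's real API, and
+the default handler merely stands in for "throw an exception".
+
+    #include <stdio.h>
+
+    #define EVENT_TYPE_MAX 16
+
+    /* overridable last resort; the real one would throw an exception */
+    static void default_generic_handler(event *e) {
+        fprintf(stderr, "unhandled event of type %d\n", (int)e->type);
+    }
+
+    static void (*type_handlers[EVENT_TYPE_MAX])(event *);
+    static void (*generic_handler)(event *) = default_generic_handler;
+
+    static void dispatch_event(event *e) {
+        if (e->handler)
+            e->handler(e);              /* handler attached to the event */
+        else if (type_handlers[e->type])
+            type_handlers[e->type](e);  /* handler for this event type   */
+        else
+            generic_handler(e);         /* overridable generic handler   */
+    }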
+
+Because events are handled in mainline code, they don't have the
+restrictions commonly associated with interrupt-level code. It's safe
+and acceptable for an event handler to throw an exception, allocate
+memory, or manipulate thread or global state. Event handlers can even
+acquire locks if they need to, though it's not a good idea for an
+event handler to block on lock acquisition.
+
+Parrot uses the priority on events for two purposes. First, the
+priority is used to order the events in the event queue. Events of a
+particular priority are handled in a FIFO manner, but higher-priority
+events are always handled before lower-priority events. Parrot also
+allows a user program or event handler to set a minimum event priority
+that it will handle. If an event with a priority lower than the
+current minimum arrives, it won't be handled, instead sitting in the
+queue until the minimum priority level is dropped. This allows an
+event handler that's dealing with a high-priority event to ignore
+lower-priority events.
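+
+In the same hypothetical sketch, those two rules (priority ordering
+with FIFO behavior within a priority, plus a minimum-priority floor)
+might be expressed like this:
+
+    static void queue_push(event_queue *q, event *e) {
+        pthread_mutex_lock(&q->lock);
+        event **slot = &q->head;
+        /* higher priorities first; FIFO among equal priorities */
+        while (*slot && (*slot)->priority >= e->priority)
+            slot = &(*slot)->next;
+        e->next = *slot;
+        *slot   = e;
+        pthread_cond_signal(&q->ready);
+        pthread_mutex_unlock(&q->lock);
+    }
+
+    static event *queue_pop(event_queue *q) {
+        pthread_mutex_lock(&q->lock);
+        event *e = q->head;
+        if (e && e->priority >= q->min_priority)
+            q->head = e->next;    /* urgent enough: hand it out       */
+        else
+            e = NULL;             /* leave low-priority events queued */
+        pthread_mutex_unlock(&q->lock);
+        return e;
+    }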
+
+User code generally doesn't need to deal with prioritized events, so
+programmers should adjust event priorities with care. Adjusting the
+default priority of an event, or adjusting the current minimum
+priority level, is a rare occurrence.  It's almost always a mistake to
+change them, but the capability is there for those rare occasions
+where it's the correct thing to do.
+
+=head2 Signals
+
+Signals are a special form of event, based on the Unix signal mechanism.
+Parrot presents them as mildly special, as a remnant of Perl's Unix
+heritage, but under the hood they're not treated any differently from
+any other event.
+
+The Unix signaling mechanism is something of a mishmash, having been
+extended and worked on over the years by a small legion of undergrad
+programmers. At this point, signals can be divided into two
+categories: those that are fatal and those that aren't.
+
+Fatal signals are things like SIGKILL, which unconditionally kills a
+process, or SIGSEGV, which indicates that the process has tried to
+access memory that isn't part of its address space.  There's no good
+way for Parrot to catch these
+signals, so they remain fatal and will kill your process.  On some
+systems it's possible to catch some of the fatal signals, but
+Parrot code itself operates at too high a level for a user program to
+do anything with them--they must be handled with special-purpose code
+written in C or some other low-level language.  Parrot itself may
+catch them in special circumstances for its own use, but that's an
+implementation detail that isn't exposed to a user program.
+
+Non-fatal signals are things like SIGCHLD, indicating that a
+child process has died, or SIGINT, indicating that the user
+has hit C<^C> on the keyboard. Parrot turns these signals into events
+and puts them in the event queue.  Your program's event handler for the
+signal will be called as soon as Parrot gets to the event in the queue,
+and your code can do what it needs to with it.
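+
+As an illustration of that path, in the same hypothetical C style: the
+C-level handler does nothing but record the signal, and mainline code
+later converts the flag into a queued event.
+
+    #include <signal.h>
+    #include <string.h>
+    #include <stdlib.h>
+
+    static volatile sig_atomic_t got_sigchld = 0;
+
+    static void note_sigchld(int sig) {
+        (void)sig;
+        got_sigchld = 1;      /* async-signal-safe: just set a flag */
+    }
+
+    static void install_signal_hooks(void) {
+        struct sigaction sa;
+        memset(&sa, 0, sizeof sa);
+        sa.sa_handler = note_sigchld;
+        sigaction(SIGCHLD, &sa, NULL);
+    }
+
+    /* called from mainline code while draining the queue */
+    static void convert_pending_signals(event_queue *q) {
+        if (got_sigchld) {
+            got_sigchld = 0;
+            event *e    = calloc(1, sizeof *e);
+            e->type     = EVENT_SIGNAL;
+            e->priority = 1;
+            queue_push(q, e);
+        }
+    }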
+
+SIGALRM, the timer expiration signal, is treated specially by
+Parrot. Generated by an expiring alarm() system call, this signal is
+normally used to provide timeouts for system calls that would
+otherwise block forever, which is very useful. The big downside to
+this is that on most systems there can only be one outstanding
+alarm() request, and while you can get around this somewhat with the
+setitimer() call (which allows up to three pending alarms), it's still
+quite limited.
+
+Since Parrot's I/O system is fully asynchronous and never blocks--even
+what looks like a blocking request still drains the event queue--the
+alarm signal isn't needed for this. Parrot instead grabs SIGALRM for
+its own use, and provides a fully generic timer system that allows
+any number of timer events, each with its own callback function
+and private data, to be outstanding.
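+
+Continuing the hypothetical sketch, such a generic timer list might
+look roughly like this; expired timers are turned into queued events
+so their callbacks run in mainline code rather than at signal level.
+
+    typedef struct timer_entry {
+        double              when;      /* absolute expiry time, seconds */
+        void              (*callback)(void *);
+        void               *data;      /* private data for the callback */
+        struct timer_entry *next;
+    } timer_entry;
+
+    static timer_entry *timers;        /* kept sorted by expiry time */
+
+    static void fire_expired_timers(double now, event_queue *q) {
+        while (timers && timers->when <= now) {
+            timer_entry *t = timers;
+            timers = t->next;
+            event *e    = calloc(1, sizeof *e);
+            e->type     = EVENT_TIMER;
+            e->data     = t;           /* dispatch invokes t->callback */
+            e->priority = 2;
+            queue_push(q, e);
+        }
+    }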
+
+=head1 ATTACHMENTS
+
+None.
+
+=head1 FOOTNOTES
+
+None.
+
+=head1 REFERENCES
+
+None.
+
+=cut
+
+__END__
+Local Variables:
+  fill-column:78
+End:

Added: trunk/docs/pdds/clip/pddXX_threads.pod
==============================================================================
--- (empty file)
+++ trunk/docs/pdds/clip/pddXX_threads.pod      Tue Apr 11 13:28:33 2006
@@ -0,0 +1,134 @@
+# Copyright: 2001-2006 The Perl Foundation.
+# $Id: $
+
+=head1 NAME
+
+docs/pdds/clip/pddXX_threads.pod - Parrot Threads
+
+=head1 ABSTRACT
+
+This document defines the requirements and implementation strategy for
+Parrot's threading model.
+
+=head1 VERSION
+
+$Revision: $
+
+=head1 DEFINITIONS
+
+Concurrency
+
+=head1 DESCRIPTION
+
+Description of the subject.
+
+=head1 IMPLEMENTATION
+
+[Excerpt from Perl 6 and Parrot Essentials to seed discussion.]
+
+Threads are a means of splitting a process into multiple pieces that
+execute simultaneously.  They're a relatively easy way to get some
+parallelism without too much work. Threads don't solve all the
+parallelism problems your program may have. Sometimes multiple
+processes on a single system, multiple processes on a cluster, or
+processes on multiple separate systems are better. But threads do
+present a good solution for many common cases.
+
+All the resources in a threaded process are shared between threads.
+This is simultaneously the great strength and great weakness of
+threads. Easy sharing is fast sharing, making it far faster to
+exchange data between threads or access shared global data than to
+share data between processes on a single system or on multiple
+systems. Easy sharing is dangerous, though, since without some sort of
+coordination between threads it's easy to corrupt that shared data.
+And, because all the threads are contained within a single process, if
+any one of them fails for some reason the entire process, with all its
+threads, dies.
+
+With a low-level language such as C, these issues are manageable. The
+core data types (integers, floats, and pointers) are all small enough
+to be handled atomically. Composite data can be protected with
+mutexes, special structures that a thread can get exclusive access to.
+The composite data elements that need protecting can each have a mutex
+associated with them, and when a thread needs to touch the data it
+just acquires the mutex first. By default there's very little data
+that must be shared between threads, so it's relatively easy, barring
+program errors, to write thread-safe code if a little thought is given
+to the program structure.
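+
+For example, the usual C pattern looks roughly like this (a generic
+pthreads illustration, not Parrot code):
+
+    #include <pthread.h>
+
+    /* A composite structure guarded by its own mutex: any thread that
+     * wants to touch the fields acquires the lock first. */
+    typedef struct shared_counter {
+        pthread_mutex_t lock;
+        long            count;
+        long            max_seen;
+    } shared_counter;
+
+    static void counter_add(shared_counter *c, long n) {
+        pthread_mutex_lock(&c->lock);
+        c->count += n;
+        if (c->count > c->max_seen)  /* both fields change atomically */
+            c->max_seen = c->count;  /* with respect to other threads */
+        pthread_mutex_unlock(&c->lock);
+    }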
+
+Things aren't this easy for Parrot, unfortunately. A PMC, Parrot's
+native data type, is a complex structure, so we can't count on the
+hardware to provide atomic access. That means Parrot has to provide
+atomicity itself, which is expensive. Getting and releasing a mutex
+isn't really that expensive in itself. It has been heavily optimized by
+platform vendors because they want threaded code to run quickly. It's
+not free, though, and when you consider that, running flat out, Parrot
+does one PMC operation per 100 CPU cycles, even adding an additional 10
+cycles per operation can slow Parrot down by 10%.
+
+For any threading scheme, it's important that your program isn't
+hindered by the platform and libraries it uses. This is a common
+problem with writing threaded code in C, for example. Many libraries
+you might use aren't thread-safe, and if you aren't careful with them
+your program will crash. While we can't make low-level libraries any
+safer, we can make sure that Parrot itself won't be a danger. There is
+very little data shared between Parrot interpreters and threads, and
+access to all the shared data is done with coordinating mutexes. This
+is invisible to your program, and just makes sure that Parrot itself
+is thread-safe.
+
+When you think about it, there are really three different threading
+models. In the first one, multiple threads have no interaction among
+themselves. This essentially does with threads the same thing that's
+done with processes. This works very well in Parrot, with the
+isolation between interpreters helping to reduce the overhead of this
+scheme. There's no possibility of data sharing at the user level, so
+there's no need to lock anything.
+
+In the second threading model, multiple threads run and pass messages
+back and forth between each other. Parrot supports this as well, via
+the event mechanism. The event queues are thread-safe, so one thread
+can safely inject an event into another thread's event queue. This is
+similar to a multiple-process model of programming, except that
+communication between threads is much faster, and it's easier to pass
+around structured data.
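+
+A generic pthreads sketch of that model (illustrative only; Parrot's
+actual mechanism is the thread-safe event queue described in the
+events PDD):
+
+    #include <pthread.h>
+    #include <stdlib.h>
+
+    typedef struct message {
+        void           *payload;
+        struct message *next;
+    } message;
+
+    typedef struct mailbox {           /* one per receiving thread */
+        pthread_mutex_t lock;
+        pthread_cond_t  ready;
+        message        *head, *tail;
+    } mailbox;
+
+    /* safe to call from any thread */
+    static void mailbox_send(mailbox *m, void *payload) {
+        message *msg = calloc(1, sizeof *msg);
+        msg->payload = payload;
+        pthread_mutex_lock(&m->lock);
+        if (m->tail) m->tail->next = msg; else m->head = msg;
+        m->tail = msg;
+        pthread_cond_signal(&m->ready);
+        pthread_mutex_unlock(&m->lock);
+    }
+
+    /* called by the owning thread; waits until a message arrives */
+    static void *mailbox_receive(mailbox *m) {
+        pthread_mutex_lock(&m->lock);
+        while (!m->head)
+            pthread_cond_wait(&m->ready, &m->lock);
+        message *msg = m->head;
+        m->head = msg->next;
+        if (!m->head)
+            m->tail = NULL;
+        pthread_mutex_unlock(&m->lock);
+        void *payload = msg->payload;
+        free(msg);
+        return payload;
+    }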
+
+In the third threading model, multiple threads run and share data
+between themselves. While Parrot can't guarantee that data at the user
+level remains consistent, it can make sure that access to shared data
+is at least safe. We do this with two mechanisms.
+
+First, Parrot presents an advisory lock system to user code. Any piece
+of user code running in a thread can lock a variable. Any attempt to
+lock a variable that another thread has locked will block until the
+lock is released. Locking a variable only blocks other lock attempts.
+It does I<not> block plain access. This may seem odd, but it's the
+same scheme used by threading systems that obey the POSIX thread
+standard, and has been well tested in practice.
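+
+A small sketch of those semantics, continuing the pthread-based
+examples above (the layout is invented; the point is that plain access
+deliberately ignores the advisory lock):
+
+    typedef struct variable {
+        pthread_mutex_t advisory;   /* honoured only by explicit locks */
+        long            value;
+    } variable;
+
+    static void var_lock(variable *v) {
+        pthread_mutex_lock(&v->advisory);     /* blocks other lockers */
+    }
+
+    static void var_unlock(variable *v) {
+        pthread_mutex_unlock(&v->advisory);
+    }
+
+    /* plain access does NOT touch the advisory lock */
+    static long var_get(variable *v)         { return v->value; }
+    static void var_set(variable *v, long n) { v->value = n; }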
+
+Second, Parrot forces all shared PMCs to be marked as such, and all
+access to shared PMCs must first acquire that PMC's private lock. This
+is done by installing an alternate vtable for shared PMCs, one that
+acquires locks on all its parameters. These locks are held only for
+the duration of the vtable function, but ensure that the PMCs affected
+by the operation aren't altered by another thread while the vtable
+function is in progress.
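+
+A rough sketch of that arrangement, continuing the pthread-based
+sketches above; the structure and vtable layout here are invented for
+illustration and are not the real PMC layout.
+
+    typedef struct pmc pmc;
+
+    typedef struct vtable {
+        void (*set_integer)(pmc *self, long value);
+        /* ... other vtable entries ... */
+    } vtable;
+
+    struct pmc {
+        vtable         *vt;
+        pthread_mutex_t private_lock;   /* used only by shared PMCs */
+        long            int_val;
+    };
+
+    static void plain_set_integer(pmc *self, long value) {
+        self->int_val = value;
+    }
+
+    static void shared_set_integer(pmc *self, long value) {
+        pthread_mutex_lock(&self->private_lock);    /* held only for the */
+        plain_set_integer(self, value);             /* duration of the   */
+        pthread_mutex_unlock(&self->private_lock);  /* vtable call       */
+    }
+
+    static vtable plain_vtable  = { plain_set_integer };
+    static vtable shared_vtable = { shared_set_integer };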
+
+=head1 ATTACHMENTS
+
+None.
+
+=head1 FOOTNOTES
+
+None.
+
+=head1 REFERENCES
+
+None.
+
+=cut
+
+__END__
+Local Variables:
+  fill-column:78
+End:
