On 02-May-2001 Andrew Reilly wrote:
> I've been programming for over twenty years myself, and have
> been able to avoid the switch from imperative non-window
> graphics coding to GUI coding by moving to GUI-free embedded
> work.
A Truly Cunning Plan. :-) BTW, I should warn you that I don't do much GUI
coding these days, and most (but not all) of my GUI coding has been on Winders.
I Am Not An Expert.
> I'm pretty sure that I understand the event model of doing
> things. It's not that dissimilar to interrupt handler code in
> my embedded work.
Absolutely.
> The thing that's making me balk at first is understanding where
> the events are coming from, and under what circumstances they
> will arrive.
<Rat>Not a Problem!</Rat>
OK, I'll consider first Your Program. The main bit generally goes:
Initialise GUI toolkit
Create main window
Call toolkit event loop
Exit
The toolkit event loop consists basically of:
while (app not exited) {
    Get next event from event queue
    Dispatch event
}
where the process of dispatching an event consists of passing it to the event
handler for the window in which the event occurred. If that doesn't say it has
handled the event, pass the event up the window hierarchy until someone does
handle it.
All of which rather begs the question of how events get on the event queue in
the first place. That's the core window system's job: you'll get an event when
the mouse moves, a mouse button is pressed, a key pressed, a window is
resized/exposed/hidden, and so on. Think of an embedded system handling an
interrupt by sticking a message into a mailbox for future processing.
> I haven't found a good introductory text or "reference" manual
> page that describes that. Do you know of any?
Nothing really springs to mind. The O'Reilly 'Java in a Nutshell' book has a
section on developing a simple 'Scribble' app (draw lines in a window with the
mouse) which might give a flavour of what's going on. Or snitch a nearby Windows
coder's copy of Petzold (Charles Petzold - Programming Windows or something like
that), which deals with raw Windows SDK coding. You may have to spend some time
looking, though; a truly scary number of Windows coders know nothing about SDK
level coding and are blissfully ignorant of what goes on inside their toolkit.
> For example: years ago I wrote a nice simulation of "diffusion
> limited aggregation", which is a fractal thing that ends up
> drawing a pretty picture that looks a bit like coral or dust
> clumps. [...]
> I dug out the source for this the other day, and have translated
> the guts (the bits that determine which pixels should be on, and
> in what colour) into C, but can't figure out how to produce the
> output. You see: the original version used the fact that the
> terminal could eat pixel updates at a certain rate to produce a
> sort-of animation of the growth of the picture. If I do the X
> thing the "obvious" way (well, the way that seemed most obvious
> from the Xlib man pages), I'd just do the whole picture creation
> to an off-screen pixmap, and then issue some sort of "display
> me" command, which might get the image up, but wouldn't be as
> much fun.
The 'display me' command will consist of 'blat this bitmap onto the screen'.
That's the standard technique for displays that take a long time to draw: draw
onto an internal bitmap and just zap that onto the screen when asked to
paint. Generally speaking, you don't want event handling to take a long time,
as the app will appear to freeze.
> If I use the "paint" or whatever event to draw the first pixel,
> what has to happen to make another redraw event happen
> immediately after, to draw the next pixel?
Well, you could invalidate the window (or area of the window). This is saying
'this bit needs to be redrawn'; the windowing system will queue a paint
event. Paint events, in Windows anyway, are low-priority (in fact Windows just
keeps a note of the invalid area of a window and generates them when there's
nothing else to do), so the rest of your app would continue to work, but would
rather hog CPU.
> If I use the "paint" or "expose" or whatever event to draw
> the whole thing, will it actually appear on the screen,
> pixel-by-pixel?
One class of events I omitted above is timer events. For fractal drawing, this
is probably the way to go; have a timer trigger frequently, and on each tick
draw a bit more into an offscreen bitmap and invalidate the window.
Handling a situation where you have some big slab of computing to do can be
tricky on a GUI. If the calculation can be broken down into bite-sized chunks,
you can use a periodic timer to trigger doing bits at a time. Otherwise
spawning a calculation thread is on the agenda.
Phew! Worra lorra wurds. HTH.
--
Jim Hague - [EMAIL PROTECTED] (Work), [EMAIL PROTECTED] (Play)
Never trust a computer you can't lift or you don't control.
--
SLUG - Sydney Linux User Group Mailing List - http://slug.org.au/
More Info: http://slug.org.au/lists/listinfo/slug