>I understand what you mean, but poll() loops, despite the misleading
>name of the central primitive, are not doing any polling at all. poll()
>works by notification, not by polling (... at least I hope that kernel
>developers have the good sense to make it so).

The underlying OS machinery is, hopefully, not actual polling, but the
paradigm presented to the application program is still effectively
a poll, just as select's is: ask mother-may-I, then attempt to
actually _do_ what is desired, and repeat until done.  I.e., a poll.
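
A minimal sketch of that mother-may-I shape, assuming a single
descriptor fd and a hypothetical handle_input():

    #include <poll.h>
    #include <unistd.h>

    struct pollfd pfd = { .fd = fd, .events = POLLIN };

    for (;;) {
        if (poll(&pfd, 1, -1) < 0)          /* mother, may I? */
            break;
        if (pfd.revents & POLLIN) {
            char buf[512];
            ssize_t n = read(fd, buf, sizeof buf);  /* now try to _do_ it */
            if (n <= 0)
                break;
            handle_input(buf, n);           /* hypothetical handler */
        }
    }                                       /* ...and around we go again */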

The non-blockingness of the actual attempt is in no way guaranteed,
in spite of what a successful poll/select implies.  Also, because it
is effectively a poll, the application tends to be written in a
non-event-driven fashion, which can make a real mess of a more
complex application.  (Not inherent, yet it seems to happen a lot.
Same as pointer abuse in C driving successor languages not to have
pointers at all.  If you don't provide a poll-oriented API then it's
MUCH harder to write a strung-out polling application.)
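
A hedged illustration of the defensive measure this forces on you:
because readiness is only advisory (the Linux select(2) man page's
BUGS section gives the example of a datagram discarded for a bad
checksum after select reported the socket readable), you must mark
the descriptor non-blocking anyway and be prepared for EAGAIN:

    #include <fcntl.h>
    #include <errno.h>
    #include <unistd.h>

    char buf[512];

    /* Readiness was only a hint; make the eventual read truly non-blocking. */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    ssize_t n = read(fd, buf, sizeof buf);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        /* poll() said "ready", but the data evaporated; go ask again */
    }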

>If you like DNIX's completion queues, there is
>no reason you should dislike poll() loops, because it's the same
>paradigm;

There we disagree, because the queued blocking operations are all
blocking in parallel!  With poll/select the blocking is all in
poll/select.  (Unless it isn't, oops.)  Oh, and you have to manage
the global poll/select timeout value yourself depending on everything
that is going on in your program, rather than queuing up autonomous
asynchronous timers, as many as needed, as just additional
asynchronous blocking operations.  Ugh.
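
To make that bookkeeping concrete, here is roughly what poll/select
obliges you to do by hand (a sketch, assuming a hypothetical list of
pending timers kept sorted by expiry):

    #include <time.h>

    struct timer {                  /* hypothetical pending-timer node */
        struct timespec expiry;
        struct timer *next;
    };

    /* Boil every pending timer down to the one global poll() timeout. */
    int next_timeout_ms(const struct timer *timers)
    {
        struct timespec now;
        long ms;

        if (!timers)
            return -1;              /* nothing pending: block forever */
        clock_gettime(CLOCK_MONOTONIC, &now);
        ms = (timers->expiry.tv_sec  - now.tv_sec)  * 1000
           + (timers->expiry.tv_nsec - now.tv_nsec) / 1000000;
        return ms > 0 ? (int)ms : 0;    /* already due: don't block at all */
    }

With queued asynchronous timers you'd instead just queue one blocking
timer operation per need and let them all complete independently.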

>the DNIX API probably just provides you with nice syntactic sugar to hide
>the gory details of adding and removing events and/or timers.

In a sense, yet that sugar (or salt, depending on your predilection
for event-driven programming) is what encourages the writing of a
fully parallel application that doesn't have weird sticking points
where it goes non-responsive.  (And haven't we _all_ seen those!)

>Other AIO paradigms mess with the program control flow. I personally
>like having maximum control over the control flow, so I'm reluctant to
>use such paradigms, but it's purely a question of taste.

I can see this, and I offer up an anecdote.  While developing this
system (for bank teller applications, in fact) we had two main GUI
applications, with two sets of authors.  One was a forms designer,
and one was the runtime data collector.  Both authors began by writing
their applications in the traditional way, with a sequential thread
of what to do next.  They were both having problems, naturally, making
a GUI application remain totally responsive to the user.  I was able to
convince the forms author to convert to an event-driven paradigm, using
AIO.  It took him a week or so to convert his rudimentary app.  I never
heard from him again, basically.  (I was the main GUI author, and was
always called in whenever anything went wrong.)  Occasionally he
thanked me for 'making' him convert, as he loaded up more and more
features onto the application and it Just Worked.  The other author(s)
were militant in their preservation of their traditional BSD-derived
select-style application, and claimed that they "didn't have time"
to convert.  For YEARS thereafter this application continued to
have weird responsiveness problems and other operational bugs, all
tied directly to the non-event-driven architecture of their program.

I even got a free trip to Rochester, NY, with the other authors, just
to observe that a persistent problem the customer was having was due
to the operator occasionally hitting an unexpected key in one particular
place in the program.  (The event-get at that point in the strung-out
code didn't know how to handle that particular key, and so ended up
inadvertently recursing into the main application loop, resulting in
stack burn and eventual death once enough of those keys had been hit
during the day.)  Such are the perils of keeping a complex program's
state in the program counter rather than in a state table.

(very) pseudo-code of the good main application loop:

    struct event event;             /* .type says which operation completed */

    while (read(tqfd, &event, sizeof event) == sizeof event) {
        switch (event.type) {
        case keystroke:       ... break;
        case menu_item:       ... break;
        case mouse_click:     ... break;
        case window_refresh:  ... break;
        case timer_exp:       ... break;
        case read_done:       ... break;
        case write_done:      ... break;
        case connection_made: ... break;
        case connection_died: ... break;
        case child_exit:      ... break;
        default:              ... break;
        }
    }
    exit(1);                        /* event queue died: bail out */

This kind of application is ALWAYS ready to respond to the user, or
to any other high-priority event, provided it uses exclusively AIO,
never SIO.

The 'sugar', such as it is, wasn't what I normally consider sugar at
all, but rather the notion that you asked servers to do things
asynchronously, and once they replied that they were done you moved
on to the next thing with them.  As opposed to asking whether or not
you may do something, doing it if possible, OR ELSE casting about for
something else to do at that point because the thing you most wanted
to do wasn't ready.
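
For contrast, a sketch of the completion style, using POSIX aio_read()
as a stand-in for the DNIX primitives (the DNIX API itself looked
different; this is just the nearest portable analogue, with the
completion-notification wiring via aio_sigevent omitted):

    #include <aio.h>
    #include <string.h>

    static char buf[512];
    static struct aiocb cb;

    /* Ask the server to do it; don't ask whether you may. */
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;
    aio_read(&cb);                  /* queue it and move on immediately */

    /* ... later, when the main loop sees the read_done-style event: */
    if (aio_error(&cb) == 0) {
        ssize_t n = aio_return(&cb);
        /* consume n bytes of buf, then queue the next operation */
    }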

It makes all the difference in the world.  And poll/select encourages
the wrong style of coding for this kind of server application.  And,
as we've already discussed, there's no guarantee that the poll/select
paradigm will NOT block at odd points where you don't expect it to.

That's the real sin here, and the point at which I say stop putting
lipstick on that particular pig.

-- Jim