On Sun, Jul 20, 2003 at 11:59:00AM -0400, Dan Sugalski wrote:
> We're supporting interrupts at the interpreter level because we must. 
> It doesn't matter much whether we like them, or we think they're a 
> good idea, the languages we target require them to be there. Perl 5, 
> Perl 6, Python, and Ruby all have support for Unix signals in pretty 
> much the way you'd get them if you were writing C code. (That is to 
> say, broken by design, dangerous to use, and of far less utility than 
> they seem on the surface)

Right, which is why I said in my initial message that dropping
interrupts might be politically impossible.

I still think that including something that is broken by design,
dangerous to use, and of questionable utility isn't a good idea,
but I can accept the argument that it may be necessary.


> >It would be entirely possible for Parrot (or a Parrot library) to
> >use AIO at a low level, without introducing interrupts to the VM layer.
> 
> Sure. But what'd be the point? Adding in interrupts allows a number 
> of high-performance idioms that aren't available without them. They 
> certainly won't be required, and most of the IO you'll see done will 
> be entirely synchronous, since that's what most compilers will be 
> spitting out. You don't *have* to use IO callbacks, you just can if 
> you want to.

Could you point me at a reference for these high-performance idioms?
While I've heard of significant gains being realized through AIO,
my understanding was that this generally applies to disk IO, where
Unix doesn't provide non-blocking IO for regular files.  The gains
come not from a different code flow, but from the ability to
perform disk access in the background.

(I'm not disputing that such idioms exist; if there's a better
way to do things that I don't know of, I want to know more about it!)
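
For concreteness, here's the kind of overlap I have in mind -- a
rough sketch using POSIX aio_read, not anything from Parrot's
source.  (The file name is a stand-in; on Linux you'd link with
-lrt.)

    #include <aio.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        static char buf[65536];
        struct aiocb cb;
        const struct aiocb *list[1] = { &cb };

        int fd = open("datafile", O_RDONLY);   /* stand-in file name */
        if (fd < 0) { perror("open"); return 1; }

        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof buf;
        cb.aio_offset = 0;

        if (aio_read(&cb) < 0) { perror("aio_read"); return 1; }

        /* ... do other useful work here while the kernel */
        /* performs the read in the background ...        */

        aio_suspend(list, 1, NULL);            /* wait for completion */
        ssize_t n = aio_return(&cb);
        printf("read %zd bytes\n", n);

        close(fd);
        return 0;
    }

If the idioms you mean go beyond this sort of overlap, that's
exactly what I'm asking about.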


> >Regarding AIO being faster: Beware premature optimization.
> 
> I'm going to start carrying a nerf bat around and smack people who 
> trot this one out.

The fact that it is often said does not make it any less true.

You've asserted that Parrot will be faster (in at least some
situations) with interrupt-driven IO than without it.  I'm
unconvinced of this claim.  In particular, I suspect that
supporting interrupts will impose an across-the-board performance
penalty, and that this penalty may well outweigh any benefits that
interrupt-driven IO would bring.
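
To be concrete about where I expect that penalty to come from: to
field asynchronous interrupts safely, the inner dispatch loop
generally has to poll a flag on every opcode (or at least on every
backward branch), whether or not an interrupt ever arrives.  A toy
sketch -- hypothetical, obviously not Parrot's actual dispatch
code:

    #include <signal.h>

    volatile sig_atomic_t pending = 0;   /* set from a signal handler */

    /* Toy bytecode interpreter loop; opcode 0 is HALT. */
    static void run(const unsigned char *pc)
    {
        for (;;) {
            if (pending) {               /* this test is paid on every */
                pending = 0;             /* opcode, even when no       */
                /* run_handlers(); */    /* interrupt is ever raised   */
            }
            switch (*pc++) {
            case 0: return;              /* HALT */
            default: break;              /* other opcodes go here */
            }
        }
    }

    int main(void)
    {
        static const unsigned char program[] = { 0 };   /* just HALT */
        run(program);
        return 0;
    }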

Now, you can ignore me if you want; you're the designer.  Hitting
me isn't going to convince me of anything, however.


> While it's not inappropriate to apply it to design, we're nowhere 
> near that point. This isn't premature optimization, or optimization 
> of any sort--it's design, and it should be done now. This is what 
> we're *supposed* to be doing. It's certainly reasonable to posit that 
> async IO is a bad design choice (won't get you very far, but you can 
> posit it :) but please don't trot out the "premature optimization" 
> quote.

This is *exactly* the time when that quote is appropriate to apply.
When a design decision is made "because it'll be faster that way",
it is always worth examining the question of whether it WILL be
faster or not.  (I am aware that there is a second reason for
supporting interrupts in Parrot--Unix signals; I was addressing the
argument that support for AIO is sufficient reason to include
interrupts.)

For example: If it turns out that Parrot, sans interrupt-driven IO,
is capable of saturating the system bus when writing to a device,
there is little point in optimizing Parrot's IO system.


> You may suspect, but you'd turn out to be incorrect--using threads to 
> simulate a real async IO system still has performance wins. And we're 
> going to be using native async stuff when we can.

Do you know of a program that does this (simulated AIO via threads)?
(Again, I'm not disputing your claim--it's just that this is
completely contrary to my experience, and I'd like to know more
about it.)
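
For reference, here's the shape I'd expect such a simulation to
take -- a from-scratch sketch with pthreads, not code from any
existing program (the file name is again a stand-in; link with
-lpthread):

    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    struct request {
        int             fd;
        char            buf[65536];
        ssize_t         result;
        int             done;        /* guarded by lock */
        pthread_mutex_t lock;
        pthread_cond_t  cond;
    };

    /* Worker thread: the blocking read() happens here, so the
       submitting thread is free to keep running. */
    static void *reader(void *arg)
    {
        struct request *r = arg;
        ssize_t n = read(r->fd, r->buf, sizeof r->buf);
        pthread_mutex_lock(&r->lock);
        r->result = n;
        r->done = 1;
        pthread_cond_signal(&r->cond);
        pthread_mutex_unlock(&r->lock);
        return NULL;
    }

    int main(void)
    {
        struct request r;
        pthread_t tid;

        memset(&r, 0, sizeof r);
        pthread_mutex_init(&r.lock, NULL);
        pthread_cond_init(&r.cond, NULL);
        r.fd = open("datafile", O_RDONLY);   /* stand-in file name */
        if (r.fd < 0) { perror("open"); return 1; }

        pthread_create(&tid, NULL, reader, &r);

        /* ... submitter does other useful work here ... */

        pthread_mutex_lock(&r.lock);         /* then waits for the  */
        while (!r.done)                      /* worker to finish    */
            pthread_cond_wait(&r.cond, &r.lock);
        pthread_mutex_unlock(&r.lock);
        pthread_join(tid, NULL);

        printf("read %zd bytes\n", r.result);
        close(r.fd);
        return 0;
    }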

                      - Damien
