Dan Sugalski <[EMAIL PROTECTED]> wrote:
> At 11:38 AM +0100 1/16/04, Leopold Toetsch wrote:
>>Event handling currently works for all run cores[1] except JIT.

> What I'd planned for with events is a bit less responsive than the
> system you've put together for the non-JIT case, and I think it'll be
> OK generally speaking.

> Ops fall into three categories:

>   1) Those that don't check for events
>   2) Those that explicitly check for events
>   3) Those that implicitly check for events

Yep, those are the cases. I think I have boiled that scheme down to "no
cost for non-JIT run cores[1]", that is, in the absence of events there
is no overhead for event checking. Event delivery (which I consider rare
in terms of CPU cycles) costs a bit more instead - but not much.

But the JIT core has to deal with event delivery too. So we have to
decide which JITted ops fall into category 3). Case 2) is no problem -
the explicit check op is already available - but we need hints for 3).

> Ops in the third category are a bit trickier. Anything that sleeps or
> waits should spin on the event queue

Ok, the latter is the simple part - all I/O or event-related ops. But the
problem remains:

What about the loop[2] in mops.pasm? It uses only integers in registers
and runs at one Parrot op per CPU cycle.

> The big thing to ponder is which ops ought go in category three. I
> can see the various invoke ops doing it, but beyond that I'm up in
> the air.

Yes. First: do we guarantee timely event handling in highly optimized
loops like the one in mops.pasm? Can we use schemes like my proposal of
using the int3 x86 instruction...

leo

[1] the switched core currently checks after the switch statement, but
    it's not simple to optimize that

[2]
  <jit_func+116>:       sub    %edi,%ebx
  <jit_func+118>:       jne    0x81c73a4 <jit_func+116>
