I've committed a first attempt at event handling. Given the absence of
further design docs it might not be the scheme that finally ends up
being used, but I hope that at least parts of it are useful.
It's lightly tested with dynoplibs/alarm.pasm:
$ parrot dynoplibs/alarm.pasm
0
1
alarm
Please see docs/dev
On Fri, Jul 18, 2003 at 05:06:09PM, Leopold Toetsch wrote:
# New Ticket Created by Leopold Toetsch
# Please include the string: [perl #23039]
# in the subject line of all future correspondence about this issue.
# URL: http://rt.perl.org/rt2/Ticket/Display.html?id=23039
RT says that
: scheduling an event puts the event handler opcode
in place. If there are no events in the queue, there are no event
check opcodes executed that could slow down the operation.
When there are multiple events to handle, it doesn't save anything to do
these later, so the event handling code puts
On Thu, Jul 17, 2003 at 08:40:44PM -0400, Benjamin Goldberg wrote:
Actually, I'm thinking of something like the following... suppose the
original code is like:
label_foo:
loop body
branch_address:
branch label_foo
Add in the following:
e_handler_foo:
.local
Nicholas Clark wrote:
On Thu, Jul 17, 2003 at 08:40:44PM -0400, Benjamin Goldberg wrote:
Actually, I'm thinking of something like the following... suppose the
original code is like:
label_foo:
loop body
branch_address:
branch label_foo
Add in the following:
e_handler_foo:
.local
Benjamin Goldberg [EMAIL PROTECTED] wrote:
Leopold Toetsch wrote:
OK here it is.
Again the description for the record:
1) Initialization:
- normal core: build an op_func_table with all entries set to opcode #4 [1]
- CG core: build ops_addr[] filled with this opcode
- prederef cores: build a list
Leopold Toetsch [EMAIL PROTECTED] wrote:
[ event checking without runloop penalty ]
3) So when the next instruction (normal, CG core) or the branch
instruction (prederefed cores) gets executed, first the op_func_table
or the patched instructions are restored, and then the event handlers
are run.
[1] This opcode (check_event__) calls the actual event handling code
and returns the same address, i.e. doesn't advance the PC.
[2] We could do the same here, but this needs cache sync for ARM and PPC,
which may or may not be allowed in signal code.
Still needs some cleanup ...
leo
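For footnote [1] above, a minimal sketch of what such a check_event__ op
function could look like; the type and field names here (Interp, op_func_t,
real_op_func_table, run_event_handlers) are assumptions for illustration,
not Parrot's actual source:

    /* Sketch only: hypothetical names, not Parrot's real code. */
    typedef struct Interp Interp;
    typedef long opcode_t;
    typedef opcode_t *(*op_func_t)(opcode_t *, Interp *);

    struct Interp {
        op_func_t *op_func_table;       /* table the runloop dispatches through */
        op_func_t *real_op_func_table;  /* the normal table, saved at init */
    };

    extern void run_event_handlers(Interp *);  /* assumed event-dispatch routine */

    /* Every slot of the "event" table points here: restore the real table,
     * run the pending handlers, and return cur_op unchanged, so the op that
     * was about to run is re-dispatched through the restored table. */
    opcode_t *check_event__(opcode_t *cur_op, Interp *interp)
    {
        interp->op_func_table = interp->real_op_func_table;  /* restore */
        run_event_handlers(interp);                          /* handlers are due */
        return cur_op;                                       /* don't advance the PC */
    }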
any
runtime penalty for an extra check if events are due.
[1] This opcode (check_event__) calls the actual event handling code
and returns the same address, i.e. doesn't advance the PC.
[2] We could do the same here, but this needs cache sync for ARM and
PPC, which may or may not be allowed
Leopold Toetsch [EMAIL PROTECTED] wrote:
... Switching the whole op_func_table() or
ops_addr[] (for CG cores) is simpler,
I have it running now for the slow and the computed goto cores.
The signal handler (interrupt code) switches the op_func_table (ops_addr)
and returns.
Then the next executed
On Thu, 17 Jul 2003, Leopold Toetsch wrote:
PC = ((op_func_t*) (*PC)) (PC, INTERP); // prederef functions
To be able to switch function tables, this then should become:
PC = ((op_func_t*) (func_table + *PC)) (PC, INTERP);
Thus predereferencing the function pointer would place an offset
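A minimal sketch of that offset idea, with hypothetical names (not Parrot's
real prederef code): the prederefed code stream stores an index into
whichever function table is currently active, so swapping a single table
pointer redirects every subsequent dispatch:

    #include <stdint.h>

    typedef struct Interp Interp;
    typedef uintptr_t *(*op_func_t)(uintptr_t *, Interp *);

    struct Interp {
        op_func_t *func_table;  /* active table; a signal handler may swap this pointer */
    };

    /* One dispatch step: *pc holds an index into the active table instead of
     * a raw function pointer, so the same prederefed code runs through either
     * the normal table or the "check events" table. */
    static uintptr_t *dispatch_one(uintptr_t *pc, Interp *interp)
    {
        op_func_t fn = interp->func_table[*pc];
        return fn(pc, interp);   /* the op returns the next prederefed PC */
    }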
Sean O'Rourke wrote:
To be able to switch function tables, this then should become:
PC = ((op_func_t*) (func_table + *PC)) (PC, INTERP);
Or is there a better way to do it?
Is replacing the next instruction with a branch to the signal handler
(like adding a breakpoint) out of the question?
I
On Thu, 17 Jul 2003, Leopold Toetsch wrote:
Is replacing the next instruction with a branch to the signal handler
(like adding a breakpoint) out of the question?
I don't know how to get the address of the next instruction, i.e. the
PC above. Going this way would either mean:
- fill the
Sean O'Rourke wrote:
On Thu, 17 Jul 2003, Leopold Toetsch wrote:
Is replacing the next instruction with a branch to the signal handler
(like adding a breakpoint) out of the question?
I don't know how to get the address of the next instruction, i.e. the
PC above.
Thinking more about this: There is no
On Thu, Jul 17, 2003 at 09:52:35PM +0200, Leopold Toetsch wrote:
Remaining is:
1) save/fill the byte_code with event handler ops.
2) use address relative to op_func_table
3) Or the official way: do regular checks.
I estimate 2) to be best for prederefed code.
I'm not sure about this. With
Gregor N. Purdy [EMAIL PROTECTED] wrote:
#define DO_OP(PC,INTERP) \
(PC = ((INTERP->op_func_table)[*PC])(PC,INTERP))
The easiest way to intercept this flow with minimal cost is to
have the mechanism that wants to take over replace the interpreter's
op_func_table with a block of pointers
The plan is to follow Gregor's idea to swap the op_func_table/ops_addr
array if events get enqueued.
The internal check_events__ will get filled into a second op_func_table
which will then take over.
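A minimal sketch of that table swap, again with hypothetical names and sizes
rather than the actual Parrot code: enqueuing an event is a single pointer
store, so the runloop's fast path pays nothing while the queue is empty:

    #define N_OPS 1024                   /* assumed opcode count, for illustration */

    typedef struct Interp Interp;
    typedef long opcode_t;
    typedef opcode_t *(*op_func_t)(opcode_t *, Interp *);

    struct Interp {
        op_func_t *op_func_table;        /* what DO_OP dispatches through */
        op_func_t  real_table[N_OPS];    /* filled elsewhere with the normal op functions */
        op_func_t  event_table[N_OPS];   /* every entry is check_events__ */
    };

    /* The op that restores the real table and runs the queued handlers. */
    extern opcode_t *check_events__(opcode_t *, Interp *);

    void init_event_table(Interp *interp)       /* once, at startup */
    {
        for (int i = 0; i < N_OPS; i++)
            interp->event_table[i] = check_events__;
        interp->op_func_table = interp->real_table;
    }

    void schedule_event_check(Interp *interp)   /* from the enqueue path / signal handler */
    {
        interp->op_func_table = interp->event_table;
    }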
There are still some questions:
- What does the actual event structure look like?
- Is this a
On Tue, Jul 15, 2003 at 10:15:57AM +0200, Leopold Toetsch wrote:
How is the described scheme supposed to work with JIT generated code ?
--
Jason
Jason Gloudon [EMAIL PROTECTED] wrote:
On Tue, Jul 15, 2003 at 10:15:57AM +0200, Leopold Toetsch wrote:
How is the described scheme supposed to work with JIT generated code ?
JIT code would be interspersed with (JITted) CHECK_EVENTS() opcodes.
They would get emitted e.g. at backward branches
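As a rough illustration in plain C of where such a check would land
(check_events() and events_pending are assumed names, and a real JIT would
of course emit machine code rather than C):

    typedef struct Interp { volatile int events_pending; } Interp;

    extern void check_events(Interp *);  /* assumed event-dispatch entry point */

    /* The JITted equivalent of a simple counted loop: the (JITted)
     * CHECK_EVENTS() lands just before the backward branch, so the cost is
     * one flag test per iteration, not one per op. */
    long sum_to_n(Interp *interp, long n)
    {
        long sum = 0, i = 0;
    loop_body:
        sum += i++;                      /* loop body */
        if (interp->events_pending)      /* CHECK_EVENTS() emitted here */
            check_events(interp);
        if (i < n)
            goto loop_body;              /* the backward branch */
        return sum;
    }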
Gregor N. Purdy [EMAIL PROTECTED] wrote:
Benjamin --
#define DO_OP(PC,INTERP) \
(PC = ((INTERP->op_func_table)[*PC])(PC,INTERP))
The easiest way to intercept this flow with minimal cost is to
have the mechanism that wants to take over replace the interpreter's
op_func_table with a block
Leopold Toetsch wrote:
[snip]
- When will we check if there are events in the event queue?
If we check too often (between every two ops), it will slow things down.
If we don't check often enough, the code might manage to avoid checking
for events entirely.
I would suggest that every flow
Benjamin --
The trick is to find the cheapest possible way to get conditional
processing to occur if and only if there are events in the event
queue.
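For reference, a minimal sketch of what a fast-core style dispatch loop
looks like when built on a DO_OP-style macro like the one quoted above
(hypothetical names, not the real include/parrot/interp_guts.h): with the
table-swap approach, the loop itself carries no event test at all:

    typedef struct Interp Interp;
    typedef long opcode_t;
    typedef opcode_t *(*op_func_t)(opcode_t *, Interp *);

    struct Interp {
        op_func_t *op_func_table;
    };

    /* Same shape as the DO_OP macro quoted earlier. */
    #define DO_OP(PC, INTERP) \
        ((PC) = ((INTERP)->op_func_table)[*(PC)]((PC), (INTERP)))

    /* The fast core is just this: dispatch until an op returns NULL.
     * Events get in purely by swapping op_func_table; no per-op check. */
    void run_fast_core(Interp *interp, opcode_t *pc)
    {
        while (pc)
            DO_OP(pc, interp);
    }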
I'll only be considering the fast core here for simplicity. But,
if you look at include/parrot/interp_guts.h, the only thing of
interest there is