Gordon Henriksen <[EMAIL PROTECTED]> wrote:

On Saturday, January 17, 2004, at 12:47, Leopold Toetsch wrote:

Ops that can leave the code segment have to explicitly check for events.
But if the "event dispatch" thread is setting some flag for the target thread to detect, it's going to need to lock (or something similar) to make sure that the value of this flag is visible to other threads. (If not, then the target thread could branch to a new code segment after the dispatch thread thinks it has set this flag, but before the flag has actually become visible to the target thread.) So that could mean a lock inside every invoke....
If the target thread handles the event and unpatches the bytecode before
the source thread finishes patching?
Then we have a kind of race ;) But AFAIK it does no harm (except that it slows down the interpreter a bit). If the target thread hits a patched instruction again, it checks the task_queue, finds it empty, patches the opcode stream back to the original code, and continues running until the next real event is scheduled.
So the really bad case here will be if the patching thread is working "just ahead" of the running thread--if it has to patch 1000 locations, then we could end up hitting each of those (and checking for an event, unpatching everything...) before the patching thread has finished. So now we are probably quadratic with the size of the segment. (Patching N locations leads to N times where we un-patch all N locations.)
And the target thread could communicate with the event-handler thread via a "volatile patch_opcode_t *". This doesn't have to be exact (so no locking), but it would allow the event thread to stop patching.
As above, it may be ineffective if we don't lock--the other thread may not see the value change. (The "volatile" keyword in Java is magic in terms of inter-thread visibility, but not in C.)
Jeff