Mr. Bolz,

   Hello, I’m a master’s student in Japan, and this is the second time I have 
sent you a mail. :)
   Recently I have been implementing an Erlang interpreter in RPython, and I 
have just added a scheduler to my interpreter to simulate Erlang’s 
multi-process model on a single core. I compared two versions of my 
interpreter, one with a scheduler and one without, and I was very surprised to 
find that the scheduler mechanism added only very little overhead: about 3% in 
my benchmark.
   In my implementation, the scheduler has a runnable queue whose elements are 
tuples of (an object providing the dispatch-loop function, a program counter, 
and a reference to the Erlang bytecode). To schedule, the scheduler dequeues an 
element from the runnable queue and calls its dispatch-loop function, which 
runs for only a limited number of iterations; afterwards the scheduler enqueues 
the element (the same tuple of dispatch-loop object, program counter and 
bytecode reference) onto the runnable queue again, as in the sketch below.
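   To make the structure concrete, here is a simplified sketch of the idea 
(illustrative only, not my actual code; the names Interp and execute and the 
budget of 2000 iterations are made up for the example):

    class Interp(object):
        # Holds the dispatch loop for one Erlang process.

        def dispatch_loop(self, pc, code, budget):
            # Interpret at most `budget` instructions starting at `pc`,
            # then return the new pc so the process can be resumed later.
            while budget > 0 and pc < len(code):
                pc = self.execute(code[pc], pc)  # one bytecode step
                budget -= 1
            return pc

        def execute(self, instruction, pc):
            return pc + 1  # real bytecode interpretation elided

    class Scheduler(object):
        def __init__(self, budget=2000):
            self.runnable = []    # the runnable queue
            self.budget = budget  # iterations per time slice

        def run(self):
            while self.runnable:
                interp, pc, code = self.runnable.pop(0)       # dequeue
                pc = interp.dispatch_loop(pc, code, self.budget)
                if pc < len(code):                            # not finished
                    self.runnable.append((interp, pc, code))  # enqueue again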
   So in my opinion this could be a problem for the JIT, because the dispatch 
loop does not run continuously: from the scheduler’s point of view, the 
dispatch loop runs only a limited number of iterations, is then suspended, 
resumed, suspended again, and so on. I think this might make profiling hard 
for the JIT, and I also have no idea whether the native code compiled by the 
JIT can be reused when the dispatch loop is resumed. From the benchmark I ran, 
I guess some special care must be taken to handle this situation (I have also 
compared the JIT logs generated for the two versions of the interpreter, and 
they look quite similar in most cases). So I am curious: what does the JIT 
actually do under a scheduler? How does the JIT cope with this discontinuous 
execution environment?
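
   For reference, the sketch below shows roughly how I picture the dispatch 
loop being hooked into the JIT with a JitDriver (again a simplified 
illustration under my assumptions, not my exact code, and the execute helper 
is a placeholder). My guess is that, since compiled traces are looked up by 
the green variables (pc and the bytecode object), a resumed process at the 
same position might be able to reuse already-compiled code:

    from rpython.rlib import jit

    # pc and code identify a position in the program (greens); budget is
    # ordinary mutable state (red).
    jitdriver = jit.JitDriver(greens=['pc', 'code'], reds=['budget'])

    def dispatch_loop(pc, code, budget):
        while budget > 0 and pc < len(code):
            jitdriver.jit_merge_point(pc=pc, code=code, budget=budget)
            pc = execute(code[pc], pc)  # one bytecode step (elided)
            budget -= 1
        return pc

Please correct me if my understanding of the greens/reds is wrong.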

Best Regards,
Ruochen Huang