Sergey Fedorov <serge.f...@gmail.com> writes:

> On 27/05/16 18:25, Paolo Bonzini wrote:
>>
>> On 27/05/2016 17:07, Sergey Fedorov wrote:
>>>>>>>  1. Make 'cpu->thread_kicked' access atomic
>>>>>>>  2. Remove global 'exit_request' and use per-CPU 'exit_request'
>>>>>>>  3. Change how 'current_cpu' is set
>>>>>>>  4. Reorganize round-robin CPU TCG thread function
>>>>>>>  5. Enable 'mmap_lock' for system mode emulation (do we really want 
>>>>>>> this?)
>>>>> No, I don't think so.
>>>>>
>>>>>>>  6. Enable 'tb_lock' for system mode emulation
>>>>>>>  7. Introduce per-CPU TCG thread function
>>>>> At least 2/3/7 must be done at the same time, but I agree that this
>>>>> patch could use some splitting. :)
>>> Hmm, 2/3 do also change the single-threaded CPU loop. I think they
>>> should be applied separately from 7.
>> Reviewed the patch now, and I'm not sure how you can do 2/3 for the
>> single-threaded CPU loop.  They could be moved out of cpu_exec and into
>> cpus.c (in a separate patch), but you need exit_request and
>> tcg_current_cpu to properly kick the single-threaded CPU loop out of
>> qemu_tcg_cpu_thread_fn.
>
> Summarizing Paolo's and my chat on IRC: we want run_on_cpu() to be served
> as soon as possible so that it does not block the IO thread for too long.
> Removing the global 'exit_request' would mean that a run_on_cpu() request
> from the IO thread wouldn't be served until the single-threaded CPU loop
> schedules the target CPU. That doesn't seem acceptable.

So I've fixed this by keeping a tcg_current_rr_cpu for the benefit of
round-robin scheduling (I needed something similar for the kick timer
once the globals had gone).
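
Roughly what I mean is the following (simplified from my tree, so treat
the exact names and atomic helpers as illustrative rather than final):
the RR-only pointer lets the kick timer and the IO thread poke whichever
vCPU is currently executing:

  /* single-threaded (RR) TCG only: the vCPU currently being run */
  static CPUState *tcg_current_rr_cpu;

  /* Kick whichever vCPU the round-robin loop is executing right now.
   * Re-read the pointer after the kick so we don't miss a CPU switch
   * that races with us. */
  static void qemu_cpu_kick_rr_cpu(void)
  {
      CPUState *cpu;
      do {
          cpu = atomic_mb_read(&tcg_current_rr_cpu);
          if (cpu) {
              cpu_exit(cpu);
          }
      } while (cpu != atomic_mb_read(&tcg_current_rr_cpu));
  }

cpu_exit() sets the per-CPU exit request, so the loop drops back out to
the thread function and can service any pending work promptly.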

I'm seeing if I can pull the exit_request changes and the other
re-factorings out of the big patch to reduce the size of this monster a
little.

> NB: Calling run_on_cpu() for another CPU from the CPU thread would cause a
> deadlock in the single-threaded round-robin CPU loop.

We've established this doesn't happen: qemu_cpu_is_self() compares
thread identities, which are all the same under RR TCG.
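
i.e. the fast path in run_on_cpu() already catches it; something like
this (sketch of the cpus.c logic, exact signatures may differ):

  bool qemu_cpu_is_self(CPUState *cpu)
  {
      /* compares thread identity, not which vCPU we "are" */
      return qemu_thread_is_self(cpu->thread);
  }

  void run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
  {
      if (qemu_cpu_is_self(cpu)) {
          /* With RR TCG every vCPU shares the one thread, so this
           * branch is taken even when cpu != current_cpu and func()
           * runs synchronously instead of being queued and waited
           * on - which is where the deadlock would have been. */
          func(data);
          return;
      }
      /* ... otherwise queue the work item and wait for the target
       * vCPU thread to drain it ... */
  }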

>
> Thanks,
> Sergey


--
Alex Bennée
