On 12/08/2015 11:58, Paolo Bonzini wrote:
On 11/08/2015 23:34, Frederic Konrad wrote:
>> Also if qemu_cond_broadcast(&qemu_io_proceeded_cond) is being dropped
>> there is no point keeping the guff around in qemu_tcg_wait_io_event.
>>
> Yes good point.
>
> BTW this leads to high consumption of host CPU eg: 100% per VCPU thread as t
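For context, qemu_io_proceeded_cond pairs a broadcast in the lock-acquisition path with a wait loop in qemu_tcg_wait_io_event; if one side is dropped, the other becomes dead weight, or worse, a wait that is never woken. Below is a minimal, self-contained sketch of that pairing using plain pthreads; io_proceeded_cond and io_requesting are stand-in names here, not the actual QEMU code.

/* Minimal illustration of the wait/broadcast pairing under discussion,
 * using plain pthreads; io_proceeded_cond and io_requesting are
 * stand-in names, not the actual QEMU code. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t global_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t io_proceeded_cond = PTHREAD_COND_INITIALIZER;
static int io_requesting;            /* analogue of iothread_requesting_mutex */

static void *vcpu_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&global_mutex);
    /* The wait side: if the broadcast below is removed, this loop has to
     * be removed as well, otherwise the thread sleeps here forever. */
    while (io_requesting) {
        pthread_cond_wait(&io_proceeded_cond, &global_mutex);
    }
    printf("vcpu: proceeding\n");
    pthread_mutex_unlock(&global_mutex);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    io_requesting = 1;
    pthread_create(&tid, NULL, vcpu_thread, NULL);

    sleep(1);                        /* stand-in for the I/O thread doing work */
    pthread_mutex_lock(&global_mutex);
    io_requesting = 0;
    pthread_cond_broadcast(&io_proceeded_cond);   /* the call being dropped */
    pthread_mutex_unlock(&global_mutex);

    pthread_join(tid, NULL);
    return 0;
}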
On 11/08/2015 22:12, Alex Bennée wrote:
Paolo Bonzini writes:
> On 10/08/2015 17:27, fred.kon...@greensocs.com wrote:
>> void qemu_mutex_lock_iothread(void)
>> {
>> -    atomic_inc(&iothread_requesting_mutex);
>> -    /* In the simple case there is no need to bump the VCPU thread out of
>> -     * TCG code execution.
>> -     */
>>
On 10/08/2015 18:15, Paolo Bonzini wrote:
On 10/08/2015 17:27, fred.kon...@greensocs.com wrote:
> void qemu_mutex_lock_iothread(void)
> {
> -    atomic_inc(&iothread_requesting_mutex);
> -    /* In the simple case there is no need to bump the VCPU thread out of
> -     * TCG code execution.
> -     */
> -    if (!tcg_enabled() || qemu_
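The removed hunk is the half of the old scheme where the I/O thread announces that it wants the big lock and bumps the VCPU thread out of TCG execution so the lock can actually be handed over. Below is a stripped-down sketch of that pattern with C11 atomics and plain pthreads; lock_requested, vcpu_in_loop and iothread_lock are illustrative names only, not the real QEMU implementation.

/* Stripped-down sketch of the "request the lock, bump the VCPU out of its
 * loop" pattern the removed lines implemented; lock_requested, vcpu_in_loop
 * and iothread_lock are illustrative names, not QEMU's implementation. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t global_mutex = PTHREAD_MUTEX_INITIALIZER;
static atomic_int lock_requested;     /* analogue of iothread_requesting_mutex */
static atomic_bool vcpu_in_loop;

static void *vcpu_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&global_mutex);
    atomic_store(&vcpu_in_loop, true);
    /* Stand-in for the TCG execution loop: keep "running guest code" until
     * another thread asks for the big lock. */
    while (atomic_load(&lock_requested) == 0) {
        /* ... execute translation blocks ... */
    }
    printf("vcpu: dropping out of the loop, releasing the lock\n");
    pthread_mutex_unlock(&global_mutex);
    return NULL;
}

static void iothread_lock(void)
{
    /* Counterpart of the removed atomic_inc(): raise the request so the
     * VCPU leaves its loop, then take the lock and drop the request. */
    atomic_fetch_add(&lock_requested, 1);
    pthread_mutex_lock(&global_mutex);
    atomic_fetch_sub(&lock_requested, 1);
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, vcpu_thread, NULL);
    while (!atomic_load(&vcpu_in_loop)) {
        /* wait until the VCPU thread holds the lock and is in its loop */
    }
    iothread_lock();
    printf("iothread: got the lock\n");
    pthread_mutex_unlock(&global_mutex);
    pthread_join(tid, NULL);
    return 0;
}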
From: KONRAD Frederic
This finally allows TCG to benefit from the iothread introduction: Drop
the global mutex while running pure TCG CPU code. Reacquire the lock
when entering MMIO or PIO emulation, or when leaving the TCG loop.
We have to revert a few optimizations for the current TCG threading
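Read as a locking scheme, the description amounts to: translated guest code runs with the big lock dropped, and the lock is taken again only around device accesses. A minimal sketch of that placement, assuming plain pthreads; big_lock and mmio_write are hypothetical names, not QEMU's API.

/* Minimal sketch of the lock placement the commit message describes: guest
 * code runs outside the big lock, which is taken only around device (MMIO
 * or PIO) accesses. big_lock and mmio_write are hypothetical names, not
 * QEMU's API. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;  /* "iothread" lock */

static void mmio_write(unsigned addr, int val)
{
    /* Device emulation still relies on the big lock for consistency. */
    pthread_mutex_lock(&big_lock);
    printf("mmio write: [%#x] <- %d\n", addr, val);
    pthread_mutex_unlock(&big_lock);
}

static void *vcpu_thread(void *arg)
{
    (void)arg;
    for (int i = 0; i < 3; i++) {
        /* Pure TCG execution: no lock held, the I/O thread can run freely. */
        /* ... execute translation blocks ... */

        /* The guest touched a device: reacquire the lock for the access. */
        mmio_write(0x1000, i);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, vcpu_thread, NULL);
    pthread_join(tid, NULL);
    return 0;
}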