Re: [Qemu-devel] [PATCH v2] qemu_mutex_iothread_locked not correctly synchronized
On 26.11.2015 at 12:25, Stefan Weil wrote:
> On 26.11.2015 at 10:12, David Engraf wrote:
>> On 25.11.2015 at 17:16, Paolo Bonzini wrote:
>>> On 25/11/2015 16:48, David Engraf wrote:
>>>> Indeed, TLS handling is broken. The address of iothread_locked is
>>>> always the same between threads and I can see that a different thread
>>>> sets iothread_locked to false, thus my current thread uses an invalid
>>>> state. I will have to check why my compiler produces invalid TLS code.
>>>
>>> That rings a bell, I think there are different CRT libraries or
>>> something like that. Stefan, what would break TLS under Windows?
>>
>> "--extra-cflags=-mthreads" is the solution. iothread_locked is unique
>> for each thread now. I saw that this is already mentioned in the
>> documentation [1], but why isn't it enabled by default if QEMU
>> requires this option on Windows? Without this option, even the latest
>> version of gcc doesn't produce working TLS code.
>>
>> Many thanks for your support!
>>
>> David
>>
>> [1] http://wiki.qemu.org/Hosts/W32
>
> Hi David,
>
> I just prepared a patch which will enable that option by default for
> all compilations targeting Windows. Thanks for your report!

Thank you.

David
Re: [Qemu-devel] [PATCH v2] qemu_mutex_iothread_locked not correctly synchronized
On 26.11.2015 at 10:12, David Engraf wrote:
> On 25.11.2015 at 17:16, Paolo Bonzini wrote:
>> On 25/11/2015 16:48, David Engraf wrote:
>>> Indeed, TLS handling is broken. The address of iothread_locked is
>>> always the same between threads and I can see that a different thread
>>> sets iothread_locked to false, thus my current thread uses an invalid
>>> state. I will have to check why my compiler produces invalid TLS code.
>>
>> That rings a bell, I think there are different CRT libraries or
>> something like that. Stefan, what would break TLS under Windows?
>
> "--extra-cflags=-mthreads" is the solution. iothread_locked is unique
> for each thread now. I saw that this is already mentioned in the
> documentation [1], but why isn't it enabled by default if QEMU
> requires this option on Windows? Without this option, even the latest
> version of gcc doesn't produce working TLS code.
>
> Many thanks for your support!
>
> David
>
> [1] http://wiki.qemu.org/Hosts/W32

Hi David,

I just prepared a patch which will enable that option by default for
all compilations targeting Windows. Thanks for your report!

Regards,
Stefan
Re: [Qemu-devel] [PATCH v2] qemu_mutex_iothread_locked not correctly synchronized
On 25.11.2015 at 17:16, Paolo Bonzini wrote:
> On 25/11/2015 16:48, David Engraf wrote:
>> Indeed, TLS handling is broken. The address of iothread_locked is
>> always the same between threads and I can see that a different thread
>> sets iothread_locked to false, thus my current thread uses an invalid
>> state. I will have to check why my compiler produces invalid TLS code.
>
> That rings a bell, I think there are different CRT libraries or
> something like that. Stefan, what would break TLS under Windows?

"--extra-cflags=-mthreads" is the solution. iothread_locked is unique
for each thread now. I saw that this is already mentioned in the
documentation [1], but why isn't it enabled by default if QEMU requires
this option on Windows? Without this option, even the latest version of
gcc doesn't produce working TLS code.

Many thanks for your support!

David

[1] http://wiki.qemu.org/Hosts/W32
Re: [Qemu-devel] [PATCH v2] qemu_mutex_iothread_locked not correctly synchronized
On 25/11/2015 16:48, David Engraf wrote:
> Indeed, TLS handling is broken. The address of iothread_locked is
> always the same between threads and I can see that a different thread
> sets iothread_locked to false, thus my current thread uses an invalid
> state. I will have to check why my compiler produces invalid TLS code.

That rings a bell, I think there are different CRT libraries or
something like that. Stefan, what would break TLS under Windows?

Paolo
Re: [Qemu-devel] [PATCH v2] qemu_mutex_iothread_locked not correctly synchronized
On 25.11.2015 at 15:36, Paolo Bonzini wrote:
> On 25/11/2015 15:04, David Engraf wrote:
>>> No, you don't. Who is reading iothread_locked during
>>> qemu_cond_wait_iothread? No one, because it is a thread-local
>>> variable whose address is never taken.
>>
>> prepare_mmio_access is reading iothread_locked by using
>> qemu_mutex_iothread_locked after qemu_tcg_wait_io_event calls
>> qemu_cond_wait. All on the same thread.
>
> Sure, but who has set iothread_locked to false during the execution of
> qemu_cond_wait? No one, because it's a thread-local variable. If it's
> true before qemu_cond_wait, it will be true after qemu_cond_wait and
> you don't need qemu_cond_wait_iothread... unless your compiler is
> broken and doesn't generate TLS properly.

Indeed, TLS handling is broken. The address of iothread_locked is always
the same between threads and I can see that a different thread sets
iothread_locked to false, thus my current thread uses an invalid state.
I will have to check why my compiler produces invalid TLS code.

David

> Can you compile cpus.c with -S and attach it?
>
> Paolo
>
>> qemu_tcg_cpu_thread_fn
>> -> qemu_tcg_wait_io_event
>>    -> qemu_cond_wait acquires the mutex
>>
>> qemu_tcg_cpu_thread_fn
>> -> tcg_exec_all -> tcg_cpu_exec -> cpu_exec
>>    -> cpu_exec ends up in calling prepare_mmio_access
Re: [Qemu-devel] [PATCH v2] qemu_mutex_iothread_locked not correctly synchronized
On 25/11/2015 15:04, David Engraf wrote:
>> No, you don't. Who is reading iothread_locked during
>> qemu_cond_wait_iothread? No one, because it is a thread-local variable
>> whose address is never taken.
>
> prepare_mmio_access is reading iothread_locked by using
> qemu_mutex_iothread_locked after qemu_tcg_wait_io_event calls
> qemu_cond_wait. All on the same thread.

Sure, but who has set iothread_locked to false during the execution of
qemu_cond_wait? No one, because it's a thread-local variable. If it's
true before qemu_cond_wait, it will be true after qemu_cond_wait and you
don't need qemu_cond_wait_iothread... unless your compiler is broken and
doesn't generate TLS properly.

Can you compile cpus.c with -S and attach it?

Paolo

> qemu_tcg_cpu_thread_fn
> -> qemu_tcg_wait_io_event
>    -> qemu_cond_wait acquires the mutex
>
> qemu_tcg_cpu_thread_fn
> -> tcg_exec_all -> tcg_cpu_exec -> cpu_exec
>    -> cpu_exec ends up in calling prepare_mmio_access
Re: [Qemu-devel] [PATCH v2] qemu_mutex_iothread_locked not correctly synchronized
On 25.11.2015 at 14:26, Paolo Bonzini wrote:
> On 25/11/2015 13:31, David Engraf wrote:
>> Hi Paolo,
>>
>> please check the new version. I removed changing the iothread_locked
>> variable. But I still need to set the correct value of iothread_locked
>> when using qemu_cond_wait.
>
> No, you don't. Who is reading iothread_locked during
> qemu_cond_wait_iothread? No one, because it is a thread-local variable
> whose address is never taken.

prepare_mmio_access is reading iothread_locked by using
qemu_mutex_iothread_locked after qemu_tcg_wait_io_event calls
qemu_cond_wait. All on the same thread.

qemu_tcg_cpu_thread_fn
-> qemu_tcg_wait_io_event
   -> qemu_cond_wait acquires the mutex

qemu_tcg_cpu_thread_fn
-> tcg_exec_all -> tcg_cpu_exec -> cpu_exec
   -> cpu_exec ends up in calling prepare_mmio_access

David
Re: [Qemu-devel] [PATCH v2] qemu_mutex_iothread_locked not correctly synchronized
On 25/11/2015 13:31, David Engraf wrote:
> Hi Paolo,
>
> please check the new version. I removed changing the iothread_locked
> variable. But I still need to set the correct value of iothread_locked
> when using qemu_cond_wait.

No, you don't. Who is reading iothread_locked during
qemu_cond_wait_iothread? No one, because it is a thread-local variable
whose address is never taken.

Paolo

> +static void qemu_cond_wait_iothread(QemuCond *cond)
> +{
> +    iothread_locked = false;
> +    qemu_cond_wait(cond, &qemu_global_mutex);
> +    iothread_locked = true;
> +}
Re: [Qemu-devel] [PATCH v2] qemu_mutex_iothread_locked not correctly synchronized
Hi Paolo,

please check the new version. I removed changing the iothread_locked
variable. But I still need to set the correct value of iothread_locked
when using qemu_cond_wait. This will fix my race condition on Windows
when prepare_mmio_access is called and checks whether the lock is
already held by the current thread.

David

Commit afbe70535ff1a8a7a32910cc15ebecc0ba92e7da introduced the function
qemu_mutex_iothread_locked to avoid recursive locking of the iothread
lock. But the function qemu_cond_wait directly acquires the lock without
modifying iothread_locked. This leads to a condition where
qemu_tcg_wait_io_event acquires the mutex by using qemu_cond_wait and
later calls tcg_exec_all -> tcg_cpu_exec -> cpu_exec ->
prepare_mmio_access, which checks whether the current thread already
holds the lock. This is done by checking the variable iothread_locked,
which is still 0, so the function tries to acquire the mutex again. The
patch adds a function called qemu_cond_wait_iothread to keep
iothread_locked in sync.

Signed-off-by: David Engraf
---
 cpus.c | 27 ++-
 1 file changed, 18 insertions(+), 9 deletions(-)

diff --git a/cpus.c b/cpus.c
index 877bd70..e4d2d4b 100644
--- a/cpus.c
+++ b/cpus.c
@@ -70,6 +70,8 @@ static CPUState *next_cpu;
 int64_t max_delay;
 int64_t max_advance;
 
+static void qemu_cond_wait_iothread(QemuCond *cond);
+
 /* vcpu throttling controls */
 static QEMUTimer *throttle_timer;
 static unsigned int throttle_percentage;
@@ -920,7 +922,7 @@ void run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
 
     while (!atomic_mb_read(&wi.done)) {
         CPUState *self_cpu = current_cpu;
-        qemu_cond_wait(&qemu_work_cond, &qemu_global_mutex);
+        qemu_cond_wait_iothread(&qemu_work_cond);
         current_cpu = self_cpu;
     }
 }
@@ -998,11 +1000,11 @@ static void qemu_tcg_wait_io_event(CPUState *cpu)
         /* Start accounting real time to the virtual clock if the CPUs
            are idle.  */
         qemu_clock_warp(QEMU_CLOCK_VIRTUAL);
-        qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+        qemu_cond_wait_iothread(cpu->halt_cond);
     }
 
     while (iothread_requesting_mutex) {
-        qemu_cond_wait(&qemu_io_proceeded_cond, &qemu_global_mutex);
+        qemu_cond_wait_iothread(&qemu_io_proceeded_cond);
     }
 
     CPU_FOREACH(cpu) {
@@ -1013,7 +1015,7 @@ static void qemu_tcg_wait_io_event(CPUState *cpu)
 static void qemu_kvm_wait_io_event(CPUState *cpu)
 {
     while (cpu_thread_is_idle(cpu)) {
-        qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+        qemu_cond_wait_iothread(cpu->halt_cond);
     }
 
     qemu_kvm_eat_signals(cpu);
@@ -1123,7 +1125,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
 
     /* wait for initial kick-off after machine start */
     while (first_cpu->stopped) {
-        qemu_cond_wait(first_cpu->halt_cond, &qemu_global_mutex);
+        qemu_cond_wait_iothread(first_cpu->halt_cond);
 
         /* process any pending work */
         CPU_FOREACH(cpu) {
@@ -1215,6 +1217,13 @@ bool qemu_mutex_iothread_locked(void)
     return iothread_locked;
 }
 
+static void qemu_cond_wait_iothread(QemuCond *cond)
+{
+    iothread_locked = false;
+    qemu_cond_wait(cond, &qemu_global_mutex);
+    iothread_locked = true;
+}
+
 void qemu_mutex_lock_iothread(void)
 {
     atomic_inc(&iothread_requesting_mutex);
@@ -1277,7 +1286,7 @@ void pause_all_vcpus(void)
     }
 
     while (!all_vcpus_paused()) {
-        qemu_cond_wait(&qemu_pause_cond, &qemu_global_mutex);
+        qemu_cond_wait_iothread(&qemu_pause_cond);
         CPU_FOREACH(cpu) {
             qemu_cpu_kick(cpu);
         }
@@ -1326,7 +1335,7 @@ static void qemu_tcg_init_vcpu(CPUState *cpu)
         cpu->hThread = qemu_thread_get_handle(cpu->thread);
 #endif
         while (!cpu->created) {
-            qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
+            qemu_cond_wait_iothread(&qemu_cpu_cond);
         }
         tcg_cpu_thread = cpu->thread;
     } else {
@@ -1347,7 +1356,7 @@ static void qemu_kvm_start_vcpu(CPUState *cpu)
     qemu_thread_create(cpu->thread, thread_name, qemu_kvm_cpu_thread_fn,
                        cpu, QEMU_THREAD_JOINABLE);
     while (!cpu->created) {
-        qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
+        qemu_cond_wait_iothread(&qemu_cpu_cond);
     }
 }
@@ -1363,7 +1372,7 @@ static void qemu_dummy_start_vcpu(CPUState *cpu)
     qemu_thread_create(cpu->thread, thread_name, qemu_dummy_cpu_thread_fn, cpu,
                        QEMU_THREAD_JOINABLE);
     while (!cpu->created) {
-        qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
+        qemu_cond_wait_iothread(&qemu_cpu_cond);
    }
 }
-- 
1.9.1

On 25.11.2015 at 10:08, David Engraf wrote:
> Hi Paolo,
>
> On 24.11.2015 at 16:54, Paolo Bonzini wrote:
>> On 24/11/2015 16:43, David Engraf wrote:
>>> Commit afbe70535ff1a8a7a32910cc15ebecc0ba92e7da introduced the
>>> function qemu_mutex_iothread_locked to avoid recursive locking of
>>> the iothread lock. Th