On 18 June 2015 at 15:23, Emilio G. Cota <c...@braap.org> wrote:
> On Thu, Jun 18, 2015 at 08:42:40 +0100, Peter Maydell wrote:
>> > What data structures are you referring to? Are they ppc-specific?
>>
>> None of the code generation data structures are locked at all --
>> if two threads try to generate code at the same time they'll
>> tend to clobber each other.
>
> AFAICT tb_gen_code is called with a mutex held (the sequence is
> mutex->tb_find_fast->tb_find_slow->tb_gen_code in cpu-exec.c)
>
> The only call to tb_gen_code that in usermode is not holding
> the lock is in cpu_breakpoint_insert->breakpoint_invalidate->
> tb_invalidate_phys_page_range->tb_gen_code. I'm not using
> gdb so I guess I cannot trigger this.
>
> Am I missing something?
I'd forgotten we had that mutex. However it's not actually a
sufficient fix for the problem. What needs to happen is that:
 (a) somebody actually sits down and figures out what data
     structures we have and what locking/per-cpuness/etc they
     need, ie a design
 (b) somebody implements that design

This is happening as part of the TCG multithreading work:
http://wiki.qemu.org/Features/tcg-multithread

This is the bug we've had kicking around for a while about
multithreading races:
https://bugs.launchpad.net/qemu/+bug/1098729

As just one example race, consider the possibility that thread A
calls tb_gen_code, which calls tb_alloc, which calls tb_flush,
which clears the whole code cache, and then tb_gen_code starts
generating code over the top of a TB that thread B was in the
middle of executing from...

>> On 17 June 2015 at 22:36, Emilio G. Cota <c...@braap.org> wrote:
>> > I don't think this is a race because it also breaks when
>> > run on a single core (with taskset -c 0).
>
> As I said, this problem doesn't seem to be a race.

The multiple threads will still all be racing with each other on
the single core. In general I don't see much benefit in detailed
investigation into the precise reason why a guest program crashes
when the whole area is known to be fundamentally not designed
right...

thanks
-- PMM