On 10/23/18 12:50 AM, Emilio G. Cota wrote:
> On Sun, Oct 21, 2018 at 14:34:25 +0100, Richard Henderson wrote:
>> On 10/19/18 2:06 AM, Emilio G. Cota wrote:
>>> @@ -540,16 +540,16 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
>>>       */
>>>      atomic_mb_set(&cpu->icount_decr.u16.high, 0);
>>>
>>> -    if (unlikely(atomic_read(&cpu->interrupt_request))) {
>>> +    if (unlikely(cpu_interrupt_request(cpu))) {
>>>          int interrupt_request;
>>>          qemu_mutex_lock_iothread();
>>> -        interrupt_request = cpu->interrupt_request;
>>> +        interrupt_request = cpu_interrupt_request(cpu);
>>>          if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
>>>              /* Mask out external interrupts for this step. */
>>>              interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
>>>          }
>>>          if (interrupt_request & CPU_INTERRUPT_DEBUG) {
>>> -            cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
>>> +            cpu_reset_interrupt(cpu, CPU_INTERRUPT_DEBUG);
>>>              cpu->exception_index = EXCP_DEBUG;
>>>              qemu_mutex_unlock_iothread();
>>>              return true;
>>
>> Multiple calls.
>
> I'd rather keep it as is.
>
> The first read takes the lock, and that has to stay unless
> we want to use atomic_set on interrupt_request everywhere.
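(The accessor itself is not part of the quoted hunk; going by the description
above, "the first read takes the lock", it presumably reads interrupt_request
under the per-CPU lock that this series introduces, roughly along these lines.
The body and the cpu->lock field are a sketch of what the series describes,
not code copied from the patch:

  static inline uint32_t cpu_interrupt_request(CPUState *cpu)
  {
      uint32_t ret;

      /* Read interrupt_request under the per-CPU lock so the value is
       * consistent with concurrent updates from other threads.
       */
      qemu_mutex_lock(&cpu->lock);
      ret = cpu->interrupt_request;
      qemu_mutex_unlock(&cpu->lock);
      return ret;
  }
)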
Why not?  That's even cheaper.

> Given that the CPU lock is uncontended (so it's cheap to
> acquire) ...

Even an uncontended lock still requires at minimum a "lock xchg" (or the
equivalent on non-x86), which isn't free -- think 50-ish cycles just for
that one insn, plus the call overhead.


r~
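To make the comparison concrete, here is a rough sketch of the lock-free
variant being suggested: readers use atomic_read() (a plain load with the
access guarantees from qemu/atomic.h), and every writer updates the field
with atomic_set() while still holding the per-CPU lock to serialize against
other writers. The helper names and the cpu->lock field follow this series
and are assumptions, not committed code:

  /* Reader: no lock needed once every writer goes through atomic_set().  */
  static inline uint32_t cpu_interrupt_request(CPUState *cpu)
  {
      return atomic_read(&cpu->interrupt_request);
  }

  /* Writer: cpu->lock still serializes writers against each other, but
   * the store itself is an atomic_set() so lock-free readers always see
   * a complete, current value.
   */
  static inline void cpu_interrupt_request_or(CPUState *cpu, uint32_t mask)
  {
      qemu_mutex_lock(&cpu->lock);
      atomic_set(&cpu->interrupt_request, cpu->interrupt_request | mask);
      qemu_mutex_unlock(&cpu->lock);
  }

With something like that in place, the fast path in cpu_handle_interrupt()
pays only an ordinary load per call, so calling the accessor more than once
is cheap, whereas the lock-taking version makes every call cost an atomic RMW.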