On Sat, 28 Sept 2024 at 22:01, Daniel Henrique Barboza
<dbarb...@ventanamicro.com> wrote:
>
> On 9/28/24 5:22 PM, Peter Maydell wrote:
> > On Tue, 24 Sept 2024 at 23:19, Alistair Francis <alistai...@gmail.com> wrote:

> >> +/* Register helper functions */
> >> +static inline uint32_t riscv_iommu_reg_mod32(RISCVIOMMUState *s,
> >> +    unsigned idx, uint32_t set, uint32_t clr)
> >> +{
> >> +    uint32_t val;
> >> +    qemu_spin_lock(&s->regs_lock);
> >> +    val = ldl_le_p(s->regs_rw + idx);
> >> +    stl_le_p(s->regs_rw + idx, (val & ~clr) | set);
> >> +    qemu_spin_unlock(&s->regs_lock);
> >> +    return val;
> >> +}
> >
> > This looks very weird. Nobody else's IOMMU implementation
> > grabs a spinlock while it is accessing guest register data.
> > Why is riscv special? Why a spinlock? (We use spinlocks
> > only very very sparingly in general.)
>
>
> The first versions of the IOMMU used QEMU threads. I believe that is where
> the locks come from (for both the registers and the cache).
>
> I'm not sure we're ever going to hit a race condition without the locks
> in the current code (i.e. using MMIO ops only). I think I'll make an
> attempt to remove the locks and see if something breaks.
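
For concreteness, dropping the spinlock would leave just the load/store
pair. A sketch of that variant, assuming the BQL really does cover every
access to s->regs_rw:

    static inline uint32_t riscv_iommu_reg_mod32(RISCVIOMMUState *s,
        unsigned idx, uint32_t set, uint32_t clr)
    {
        /* Assumes the caller holds the BQL; no further locking needed. */
        uint32_t val = ldl_le_p(s->regs_rw + idx);

        stl_le_p(s->regs_rw + idx, (val & ~clr) | set);
        return val;
    }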

Generally, access to the backing storage for guest registers in a
device model is protected by the fact that the iothread lock
(BQL) is held while the guest is accessing the device. I think for
IOMMU devices there may be some extra complication because they
get called as part of the translate-this-address codepath,
where I think the BQL is not held, and so they typically
have a qemu_mutex to protect themselves[*]. But they
don't generally need or want to do the locking at this very
fine-grained per-read/per-write level.

[*] This is just what I've gathered from a quick read through
our other IOMMU implementations; don't take it as gospel.
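
For illustration, the usual shape is a per-device mutex taken around
the translate callback, since that can run outside the BQL. A sketch
(MyIOMMUState, iommu_lock and my_iommu_do_translate are hypothetical
names, not any existing device's API):

    static IOMMUTLBEntry my_iommu_translate(IOMMUMemoryRegion *iommu_mr,
                                            hwaddr addr,
                                            IOMMUAccessFlags flag,
                                            int iommu_idx)
    {
        MyIOMMUState *s = container_of(iommu_mr, MyIOMMUState, iommu_mr);
        IOMMUTLBEntry entry;

        /* Device state may be read here outside the BQL, so take the
         * per-device lock; register-write paths must take it too. */
        qemu_mutex_lock(&s->iommu_lock);
        entry = my_iommu_do_translate(s, addr, flag);
        qemu_mutex_unlock(&s->iommu_lock);
        return entry;
    }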

If you do need to do something complicated with locking, it would
be good to have a comment somewhere explaining why, and what the
locking principles are.
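
Something along these lines (purely illustrative; fill in the real
reasons) near the lock's declaration would do:

    /*
     * Locking: regs_lock protects regs_rw. It is a spinlock rather
     * than a mutex because <reason>; it nests with respect to the
     * BQL as follows: <rule>.
     */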

thanks
-- PMM
