On 2023-05-23 07:19:43, Sean Christopherson wrote:
> On Tue, May 23, 2023, Kautuk Consul wrote:
> > On 2022-07-06 16:20:10, Chao Peng wrote:
> > > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > > index e9153b54e2a4..c262ebb168a7 100644
> > > --- a/include/linux/kvm_host.h
> > > +++ b/include/linux/kvm_host.h
> > > @@ -765,10 +765,10 @@ struct kvm {
> > >
> > > #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
> > > struct mmu_notifier mmu_notifier;
> > > - unsigned long mmu_notifier_seq;
> > > - long mmu_notifier_count;
> > > - gfn_t mmu_notifier_range_start;
> > > - gfn_t mmu_notifier_range_end;
> > > + unsigned long mmu_updating_seq;
> > > + long mmu_updating_count;
> >
> > Can we convert mmu_updating_seq and mmu_updating_count to atomic_t?
>
> Heh, can we? Yes. Should we? No.
>
> > I see that not all accesses to these are under the kvm->mmu_lock
> > spinlock.
>
> Ya, working as intended. Ignoring gfn_to_pfn_cache for the moment, all
> accesses to mmu_invalidate_in_progress (was mmu_notifier_count /
> mmu_updating_count above) are done under mmu_lock. And for
> mmu_notifier_seq (mmu_updating_seq above), all writes and some reads
> are done under mmu_lock. The only reads that are done outside of
> mmu_lock are the initial snapshots of the sequence number.
>
> gfn_to_pfn_cache uses a different locking scheme; the comments in
> mmu_notifier_retry_cache() do a good job explaining the ordering.
>
> > This will also remove the need for putting separate smp_wmb() and
> > smp_rmb() memory barriers while accessing these structure members.
>
> No, the memory barriers aren't there to provide any kind of atomicity.
> The barriers exist to ensure that stores and loads to/from the sequence
> and invalidate in-progress counts are ordered relative to the
> invalidation (stores to counts) and creation (loads) of SPTEs. Making
> the counts atomic changes nothing because atomic operations don't
> guarantee the necessary ordering.
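For reference, my reading of the pairing you are describing is roughly
the following (a sketch of the pattern from memory, not the exact KVM
code, using the current mmu_invalidate_* field names):

/* Invalidation side, called under mmu_lock. */
void mmu_invalidate_begin_sketch(struct kvm *kvm)
{
	/*
	 * Any page fault task that takes mmu_lock after this point
	 * sees the elevated count and retries.
	 */
	kvm->mmu_invalidate_in_progress++;
}

void mmu_invalidate_end_sketch(struct kvm *kvm)
{
	/* Bump the sequence before dropping the in-progress count... */
	kvm->mmu_invalidate_seq++;
	/* ...and make sure the sequence bump is visible first. */
	smp_wmb();
	kvm->mmu_invalidate_in_progress--;
}

/* Page fault side, under mmu_lock; mmu_seq was snapshotted earlier. */
bool mmu_invalidate_retry_sketch(struct kvm *kvm, unsigned long mmu_seq)
{
	if (kvm->mmu_invalidate_in_progress)
		return true;
	/*
	 * Pairs with the smp_wmb() above: read the in-progress count
	 * before reading the sequence.
	 */
	smp_rmb();
	return kvm->mmu_invalidate_seq != mmu_seq;
}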
To be clear, I'm not saying that the memory barriers provide atomicity.
My comment was based on the assumption that "all atomic operations are
implicit memory barriers". If that assumption were true, then the
explicit barriers would no longer be needed once these two structure
members were accessed via atomic operations.
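Concretely, the conversion I had in mind was something along these lines
(a hypothetical, untested sketch, not a real patch):

/* Hypothetical sketch only: the two fields in struct kvm... */
atomic_long_t mmu_updating_seq;		/* was unsigned long */
atomic_long_t mmu_updating_count;	/* was long */

/* ...with the accesses converted accordingly, e.g.: */
atomic_long_inc(&kvm->mmu_updating_seq);	/* writer */
seq = atomic_long_read(&kvm->mmu_updating_seq);	/* reader */

Though, re-reading Documentation/atomic_t.txt, plain atomic_long_read()
and non-value-returning RMW ops like atomic_long_inc() are unordered, so
they would not actually replace the smp_wmb()/smp_rmb() pairing, which I
take to be your point.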
>
> E.g. when handling a page fault, KVM snapshots the sequence outside of
> mmu_lock _before_ touching any state that is involved in resolving the
> host pfn, e.g. primary MMU state (VMAs, host page tables, etc.). After
> the page fault task acquires mmu_lock, KVM checks that there are no
> in-progress invalidations and that the sequence count is the same.
> This ensures that if there is a concurrent page fault and invalidation
> event, the page fault task will either acquire mmu_lock and create
> SPTEs _before_ the invalidation is processed, or it will observe an
> elevated mmu_invalidate_in_progress or a different sequence count and
> thus retry the page fault.
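So the page-fault path, as I understand your description, looks roughly
like this (a condensed sketch with hypothetical helper names, reusing
mmu_invalidate_retry_sketch() from above):

static int handle_fault_sketch(struct kvm *kvm, gfn_t gfn)
{
	unsigned long mmu_seq;
	kvm_pfn_t pfn;

retry:
	/* Snapshot the sequence before touching primary MMU state. */
	mmu_seq = kvm->mmu_invalidate_seq;
	smp_rmb();

	/* Resolve the host pfn; walks VMAs etc., may sleep. */
	pfn = resolve_host_pfn(kvm, gfn);	/* hypothetical helper */

	write_lock(&kvm->mmu_lock);	/* rwlock on some arches */
	if (mmu_invalidate_retry_sketch(kvm, mmu_seq)) {
		/* Raced with an invalidation; pfn may be stale. */
		write_unlock(&kvm->mmu_lock);
		release_pfn(pfn);		/* hypothetical helper */
		goto retry;
	}
	/* No invalidation ran in between; safe to create SPTEs. */
	install_spte(kvm, gfn, pfn);		/* hypothetical helper */
	write_unlock(&kvm->mmu_lock);
	return 0;
}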