On Thu, Feb 23, 2023, Yu Zhao wrote:
> On Thu, Feb 23, 2023 at 12:11 PM Sean Christopherson <sea...@google.com> wrote:
> >
> > On Thu, Feb 23, 2023, Yu Zhao wrote:
> > > > As alluded to in patch 1, unless batching the walks even if KVM does _not_
> > > > support a lockless walk is somehow _worse_ than using the existing
> > > > mmu_notifier_clear_flush_young(), I think batching the calls should be
> > > > conditional only on LRU_GEN_SPTE_WALK.  Or if we want to avoid batching
> > > > when there are no mmu_notifier listeners, probe mmu_notifiers.  But don't
> > > > call into KVM directly.
> > >
> > > I'm not sure I fully understand. Let's present the problem on the MM
> > > side: assuming KVM supports lockless walks, batching can still be
> > > worse (very unlikely), because GFNs can exhibit no memory locality at
> > > all. So this option allows userspace to disable batching.
> >
> > I'm asking the opposite.  Is there a scenario where batching+lock is worse
> > than !batching+lock?  If not, then don't make batching depend on lockless
> > walks.
> 
> Yes, absolutely. batching+lock means we take/release mmu_lock for
> every small batch of PTEs across the entire VA space -- each batch
> covers only 64 PTEs, but the overall walk spans the whole VM.
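
A minimal sketch of the locking pattern being described above
(illustrative assumptions throughout: age_one_gfn() is a made-up
helper, the function names are not from the patch, and 64 stands in
for the batch size mentioned in the thread):

/* !batching: a bounded walk, e.g. the single PMD covered by
 * lru_gen_look_around(), takes mmu_lock once for the range. */
static void age_one_pmd_locked(struct kvm *kvm, gfn_t start, gfn_t end)
{
	gfn_t gfn;

	write_lock(&kvm->mmu_lock);
	for (gfn = start; gfn < end; gfn++)
		age_one_gfn(kvm, gfn);
	write_unlock(&kvm->mmu_lock);
}

/* batching+lock: mmu_lock is taken and released once per 64-PTE
 * batch, repeated until the entire VA space has been covered, so
 * the lock churn scales with the size of the whole VM. */
static void age_whole_vm_batched(struct kvm *kvm, gfn_t last)
{
	gfn_t gfn;
	int i;

	for (gfn = 0; gfn < last; gfn += 64) {
		write_lock(&kvm->mmu_lock);
		for (i = 0; i < 64 && gfn + i < last; i++)
			age_one_gfn(kvm, gfn + i);
		write_unlock(&kvm->mmu_lock);
	}
}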

Who is "we"?  I don't see anything in the kernel that triggers walking the whole
VMA, e.g. lru_gen_look_around() limits the walk to a single PMD.  I feel like 
I'm
missing something...
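
For reference, a simplified sketch of the bounding that
lru_gen_look_around() applies (approximate, not the exact mm/vmscan.c
code; addr and vma are assumed to be the faulting address and its VMA):

	/* Clamp the look-around walk to the single PTE table -- one
	 * PMD's worth of address space -- containing the faulting
	 * address, clipped to the VMA boundaries. */
	unsigned long start = max(addr & PMD_MASK, vma->vm_start);
	unsigned long end = min((addr | ~PMD_MASK) + 1, vma->vm_end);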
