On Wed, Mar 31, 2021, Paolo Bonzini wrote:
> On 31/03/21 23:05, Sean Christopherson wrote:
> > > Wouldn't it be incorrect to lock a mutex (e.g. inside *another* MMU
> > > notifier's invalidate callback) while holding an rwlock_t? That makes
> > > sense
> > > because anybody that's busy waiting in write_lock potentially cannot be
On Wed, Mar 31, 2021, Paolo Bonzini wrote:
> On 31/03/21 18:41, Sean Christopherson wrote:
> > > That said, the easiest way to avoid this would be to always update
> > > mmu_notifier_count.
> > Updating mmu_notifier_count requires taking mmu_lock, which would defeat the
> > purpose of these shenanigans.
On Wed, Mar 31, 2021, Paolo Bonzini wrote:
> On 26/03/21 03:19, Sean Christopherson wrote:
> > + /*
> > + * Reset the lock used to prevent memslot updates between MMU notifier
> > + * range_start and range_end. At this point no more MMU notifiers will
> > + * run, but the lock could still be held if KVM's notifier was removed
On Wed, Mar 31, 2021, Paolo Bonzini wrote:
> On 26/03/21 03:19, Sean Christopherson wrote:
> Also, related to the first part of the series, perhaps you could structure
> the series in a slightly different way:
>
> 1) introduce the HVA walking API in common code, complete with on_lock and
> patch 1
On Wed, Mar 31, 2021, Paolo Bonzini wrote:
> On 31/03/21 21:47, Sean Christopherson wrote:
> > Rereading things, a small chunk of the rwsem nastiness can go away. I
> > don't see
> > any reason to use rw_semaphore instead of rwlock_t.
>
> Wouldn't it be incorrect to lock a mutex (e.g. inside *another* MMU
On Wed, Mar 31, 2021, Sean Christopherson wrote:
> On Wed, Mar 31, 2021, Paolo Bonzini wrote:
> > On 31/03/21 21:47, Sean Christopherson wrote:
> > I also thought of busy waiting on down_read_trylock if the MMU notifier
> > cannot block, but that would also be invalid for the opposite reason (the
>
On 31/03/21 23:22, Sean Christopherson wrote:
On a related topic, any preference on whether to have an explicit "must_lock"
flag (what I posted), or derive the logic based on other params?
The helper I posted does:
if (range->must_lock &&
kvm_mmu_lock_and_check_handler(kvm,
On 31/03/21 23:05, Sean Christopherson wrote:
Wouldn't it be incorrect to lock a mutex (e.g. inside *another* MMU
notifier's invalidate callback) while holding an rwlock_t? That makes sense
because anybody that's busy waiting in write_lock potentially cannot be
preempted until the other task get
On 31/03/21 22:52, Sean Christopherson wrote:
100% agree with introducing on_lock separately from the conditional locking.
Not so sure about introducing conditional locking and then converting non-x86
archs. I'd prefer to keep the conditional locking after arch conversion.
If something does go
On 31/03/21 21:47, Sean Christopherson wrote:
Rereading things, a small chunk of the rwsem nastiness can go away. I don't see
any reason to use rw_semaphore instead of rwlock_t.
Wouldn't it be incorrect to lock a mutex (e.g. inside *another* MMU
notifier's invalidate callback) while holding an rwlock_t?
On 31/03/21 22:15, Sean Christopherson wrote:
On Wed, Mar 31, 2021, Paolo Bonzini wrote:
On 26/03/21 03:19, Sean Christopherson wrote:
+ /*
+ * Reset the lock used to prevent memslot updates between MMU notifier
+ * range_start and range_end. At this point no more MMU notifiers will
On 31/03/21 18:41, Sean Christopherson wrote:
That said, the easiest way to avoid this would be to always update
mmu_notifier_count.
Updating mmu_notifier_count requires taking mmu_lock, which would defeat the
purpose of these shenanigans.
Okay; I wasn't sure if the problem was contention with
On 26/03/21 03:19, Sean Christopherson wrote:
+ /*
+ * Reset the lock used to prevent memslot updates between MMU notifier
+ * range_start and range_end. At this point no more MMU notifiers will
+ * run, but the lock could still be held if KVM's notifier was removed
+
On 26/03/21 03:19, Sean Christopherson wrote:
Avoid taking mmu_lock for unrelated .invalidate_range_{start,end}()
notifications. Because mmu_notifier_count must be modified while holding
mmu_lock for write, and must always be paired across start->end to stay
balanced, lock elision must happen in both or none. To meet that
requirement, add a