On Wed, Sep 30, 2020 at 12:41:08PM +0200, Roger Pau Monne wrote:
> Introduce a per virtual timer lock that replaces the existing per-vCPU
> and per-domain vPT locks. Since virtual timers are no longer assigned
> or migrated between vCPUs, the locking can be simplified to an
> in-structure spinlock that protects all the fields.
> 
> This requires introducing a helper to initialize the spinlock, which
> could also be used to initialize other virtual timer fields in the
> future.
> 
> Signed-off-by: Roger Pau Monné <roger....@citrix.com>

Just realized I had the following uncommitted chunk that should have
been part of this patch; nothing critical, but the tm_lock can now be
removed.
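
For context, here is a minimal, freestanding sketch of the locking
pattern the commit message describes: each virtual timer embeds its own
lock, initialized by a helper, instead of relying on a coarse per-vCPU
lock. This is not Xen code; all names are made up and pthread mutexes
stand in for Xen's spinlock API.

/*
 * Illustration only (not Xen code): a per-object lock embedded in the
 * structure it protects, plus an init helper that could also set up
 * other fields later.
 */
#include <pthread.h>
#include <stdint.h>

struct vtimer {
    pthread_mutex_t lock;    /* stands in for the in-structure spinlock */
    uint64_t period;         /* example fields guarded by the lock */
    uint64_t next_deadline;
};

/* Helper that initializes the lock (and any other timer fields). */
static void vtimer_init(struct vtimer *t)
{
    pthread_mutex_init(&t->lock, NULL);
    t->period = 0;
    t->next_deadline = 0;
}

/* Accesses to the timer state take the timer's own lock. */
static void vtimer_set_period(struct vtimer *t, uint64_t period)
{
    pthread_mutex_lock(&t->lock);
    t->period = period;
    pthread_mutex_unlock(&t->lock);
}

With every field guarded by the timer's own lock, the per-vCPU tm_lock
no longer protects anything, which is what the chunk below removes.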

---8<---
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7cb4511b60..7daca3f85c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1554,8 +1554,6 @@ int hvm_vcpu_initialise(struct vcpu *v)
 
     hvm_asid_flush_vcpu(v);
 
-    spin_lock_init(&v->arch.hvm.tm_lock);
-
     rc = hvm_vcpu_cacheattr_init(v); /* teardown: vcpu_cacheattr_destroy */
     if ( rc != 0 )
         goto fail1;
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 07a9890ed3..6e3bdef5bc 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -166,9 +166,6 @@ struct hvm_vcpu {
     s64                 cache_tsc_offset;
     u64                 guest_time;
 
-    /* Lock for virtual platform timers. */
-    spinlock_t          tm_lock;
-
     bool                flag_dr_dirty;
     bool                debug_state_latch;
     bool                single_step;

