Hi Radim,

On 7/13/16 21:29, Radim Krčmář wrote:
2016-07-13 08:20-0500, Suravee Suthikulpanit:
>From: Suravee Suthikulpanit <suravee.suthikulpa...@amd.com>
>
>This patch introduces avic_ga_log_notifier, which will be called
>by IOMMU driver whenever it handles the Guest vAPIC (GA) log entry.
>
>Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpa...@amd.com>
>---
>  arch/x86/include/asm/kvm_host.h |  2 ++
>  arch/x86/kvm/svm.c              | 68 +++++++++++++++++++++++++++++++++++++++--
>  2 files changed, 68 insertions(+), 2 deletions(-)
>
>diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>index 69e62862..a9466ad 100644
>--- a/arch/x86/include/asm/kvm_host.h
>+++ b/arch/x86/include/asm/kvm_host.h
>@@ -776,9 +776,11 @@ struct kvm_arch {
>    bool disabled_lapic_found;
>
>    /* Struct members for AVIC */
>+   u32 avic_vm_id;
>    u32 ldr_mode;
>    struct page *avic_logical_id_table_page;
>    struct page *avic_physical_id_table_page;
>+   struct hlist_node hnode;
>  };
>
>  struct kvm_vm_stat {
>diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
>index 16ef31b..1d9f2f6 100644
>--- a/arch/x86/kvm/svm.c
>+++ b/arch/x86/kvm/svm.c
>@@ -34,6 +34,8 @@
>  #include <linux/sched.h>
>  #include <linux/trace_events.h>
>  #include <linux/slab.h>
>+#include <linux/amd-iommu.h>
>+#include <linux/hashtable.h>
>
>  #include <asm/apic.h>
>  #include <asm/perf_event.h>
>@@ -928,6 +930,53 @@ static void svm_disable_lbrv(struct vcpu_svm *svm)
>    set_msr_interception(msrpm, MSR_IA32_LASTINTTOIP, 0, 0);
>  }
>
>+/* Note:
>+ * This hash table is used to map VM_ID to a struct kvm_arch,
>+ * when handling AMD IOMMU GALOG notification to schedule in
>+ * a particular vCPU.
>+ */
>+#define SVM_VM_DATA_HASH_BITS      8
>+DECLARE_HASHTABLE(svm_vm_data_hash, SVM_VM_DATA_HASH_BITS);
>+static spinlock_t svm_vm_data_hash_lock;
>+
>+/* Note:
>+ * This function is called from IOMMU driver to notify
>+ * SVM to schedule in a particular vCPU of a particular VM.
>+ */
>+static int avic_ga_log_notifier(int vm_id, int vcpu_id)
>+{
>+   unsigned long flags;
>+   struct kvm_arch *ka = NULL;
>+   struct kvm_vcpu *vcpu = NULL;
>+   struct vcpu_svm *svm = NULL;
>+
>+   pr_debug("SVM: %s: vm_id=%#x, vcpu_id=%#x\n", __func__, vm_id, vcpu_id);
>+
>+   spin_lock_irqsave(&svm_vm_data_hash_lock, flags);
>+   hash_for_each_possible(svm_vm_data_hash, ka, hnode, vm_id) {
>+           struct kvm *kvm = container_of(ka, struct kvm, arch);
>+
>+           vcpu = kvm_get_vcpu_by_id(kvm, vcpu_id);
The first result is not necessarily the correct one.

With more than 256 active VMs, a hash-bucket collision is guaranteed and
cannot be disambiguated, so vCPUs in both VMs need to be woken up.

Having a 24 bit vm_id and checking that
   kvm->arch.avic_vm_id & 0xffffff == vm_id
would help a bit to avoid useless wakeups, but the collision itself cannot
be avoided.

True. What if SVM guarantees that the vm_id won't conflict between any two
active VMs?

Thanks
Suravee.
