Han, Weidong wrote:
[Rebased the patch because my mmio patch (commit 0d679782) has been
checked in]

From 9e68fc762358cc44cfec3968ac5ec65324ce04d7 Mon Sep 17 00:00:00 2001
From: Weidong Han <[EMAIL PROTECTED]>
Date: Mon, 6 Oct 2008 14:02:18 +0800
Subject: [PATCH] Support multiple device assignment to one guest

The current VT-d patches in kvm support assigning only one device to a
guest, because dmar_domain is per device.

In order to support multiple device assignment, this patch wraps
dmar_domain with a reference count (kvm_vtd_domain), and also adds a
pointer in kvm_assigned_dev_kernel that links to a kvm_vtd_domain.

Each dmar_domain owns one VT-d page table. In order to reduce the
number of page tables and improve IOTLB utilization, devices assigned
to the same guest and sitting under the same IOMMU share the same
kvm_vtd_domain.
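
(For illustration, a minimal sketch of the wrapper described above.
The struct name comes from the description; the fields and their names
are my guesses, not the actual patch contents.)

#include <linux/list.h>

struct dmar_domain;		/* VT-d domain, kernel-internal type */
struct intel_iommu;		/* one instance per VT-d hardware unit */

/*
 * Illustrative guess at kvm_vtd_domain: one wrapper is shared by all
 * devices of a guest that sit under the same IOMMU, and ref_count
 * tracks how many assigned devices currently use it.
 */
struct kvm_vtd_domain {
	struct dmar_domain *domain;	/* shared VT-d page table */
	struct intel_iommu *iommu;	/* hardware unit it belongs to */
	int ref_count;			/* assigned devices using this domain */
	struct list_head list;		/* linked off the owning guest */
};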


I don't understand this. If we have one dmar domain per guest, why do we need reference counting at all?

We can create the dmar domain when we assign the first device, and destroy it when we deassign the last device, but otherwise I don't see a need for changes. In particular, I don't understand this:

@@ -351,7 +351,6 @@ struct kvm_arch{
         */
        struct list_head active_mmu_pages;
        struct list_head assigned_dev_head;
-       struct dmar_domain *intel_iommu_domain;
        struct kvm_pic *vpic;
        struct kvm_ioapic *vioapic;
        struct kvm_pit *vpit;
@@ -305,6 +310,7 @@ struct kvm_assigned_dev_kernel {
        int irq_requested;
        struct pci_dev *dev;
        struct kvm *kvm;
+       struct kvm_vtd_domain *vtd_domain;
 };

Oh, I see it now.  Different devices may need to go under different iommus.

This really feels like it should be handled by the iommu API. Users shouldn't need to bother with it.

Joerg, can your dma api handle this?
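
(Purely as a sketch of what "handled by the iommu API" could mean: KVM
would keep one opaque domain per guest and just attach/detach devices,
leaving the per-IOMMU bookkeeping to the iommu driver. The
iommu_domain_alloc()/iommu_attach_device() names and the
kvm->arch.iommu_domain field below are assumptions about such an
interface, not something that exists in the tree today.)

#include <linux/kvm_host.h>
#include <linux/pci.h>
#include <linux/iommu.h>	/* assumed generic iommu interface */

static int kvm_iommu_assign_device(struct kvm *kvm, struct pci_dev *pdev)
{
	/* create the per-guest domain on the first assignment */
	if (!kvm->arch.iommu_domain) {
		kvm->arch.iommu_domain = iommu_domain_alloc();
		if (!kvm->arch.iommu_domain)
			return -ENOMEM;
	}
	/* the iommu driver picks the right hardware unit for this device */
	return iommu_attach_device(kvm->arch.iommu_domain, &pdev->dev);
}

static void kvm_iommu_deassign_device(struct kvm *kvm, struct pci_dev *pdev)
{
	iommu_detach_device(kvm->arch.iommu_domain, &pdev->dev);
	/* destroy the domain when the last assigned device goes away */
	if (list_empty(&kvm->arch.assigned_dev_head)) {
		iommu_domain_free(kvm->arch.iommu_domain);
		kvm->arch.iommu_domain = NULL;
	}
}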

--
error compiling committee.c: too many arguments to function
