On Thu, Jan 28, 2010 at 12:37:57PM +0100, Joerg Roedel wrote:
> This patch changes the implementation of
> kvm_iommu_map_pages to map the pages with the host page size
> into the io virtual address space.
>
> Signed-off-by: Joerg Roedel <joerg.roe...@amd.com>
> ---
>  virt/kvm/iommu.c |  106 ++++++++++++++++++++++++++++++++++++++++++-----------
>  1 files changed, 84 insertions(+), 22 deletions(-)
>
> diff --git a/virt/kvm/iommu.c b/virt/kvm/iommu.c
> index 65a5143..92a434d 100644
> --- a/virt/kvm/iommu.c
> +++ b/virt/kvm/iommu.c
> @@ -32,12 +32,27 @@ static int kvm_iommu_unmap_memslots(struct kvm *kvm);
>  static void kvm_iommu_put_pages(struct kvm *kvm,
>  				gfn_t base_gfn, unsigned long npages);
>
> +static pfn_t kvm_pin_pages(struct kvm *kvm, struct kvm_memory_slot *slot,
> +			   gfn_t gfn, unsigned long size)
> +{
> +	gfn_t end_gfn;
> +	pfn_t pfn;
> +
> +	pfn = gfn_to_pfn_memslot(kvm, slot, gfn);

If gfn_to_pfn_memslot() returns the pfn of bad_page, you might create a
large iommu translation for it?
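A minimal sketch of the check I have in mind (untested; it assumes the
existing is_error_pfn() helper is the right test for bad_page here, and
that the caller then checks the returned pfn before calling iommu_map()):

	pfn = gfn_to_pfn_memslot(kvm, slot, gfn);
	if (is_error_pfn(pfn))
		/* hand the error pfn back so the caller can skip the mapping */
		return pfn;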
> +	/* Map into IO address space */
> +	r = iommu_map(domain, gfn_to_gpa(gfn), pfn_to_hpa(pfn),
> +		      get_order(page_size), flags);
> +
> +	gfn += page_size >> PAGE_SHIFT;

gfn should be increased only after checking for failure; otherwise the
wrong npages is passed to kvm_iommu_put_pages() on the unmap_pages path
(see the sketch after the quoted hunk).

>
> -		pfn = gfn_to_pfn_memslot(kvm, slot, gfn);
> -		r = iommu_map_range(domain,
> -				    gfn_to_gpa(gfn),
> -				    pfn_to_hpa(pfn),
> -				    PAGE_SIZE, flags);
>  		if (r) {
>  			printk(KERN_ERR "kvm_iommu_map_address:"
>  			       "iommu failed to map pfn=%lx\n", pfn);
>  			goto unmap_pages;
>  		}
> -		gfn++;
> +
>  	}
> +
>  	return 0;
>
>  unmap_pages:
> -	kvm_iommu_put_pages(kvm, slot->base_gfn, i);
> +	kvm_iommu_put_pages(kvm, slot->base_gfn, gfn);
>  	return r;
>  }
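For reference, a sketch of the reordering I mean (untested, written
against the hunk quoted above):

	/* Map into IO address space */
	r = iommu_map(domain, gfn_to_gpa(gfn), pfn_to_hpa(pfn),
		      get_order(page_size), flags);
	if (r) {
		printk(KERN_ERR "kvm_iommu_map_address:"
		       "iommu failed to map pfn=%lx\n", pfn);
		goto unmap_pages;
	}

	/* Advance gfn only once the mapping succeeded, so that the
	 * npages value derived from it on the unmap_pages path only
	 * covers pages that were actually mapped. */
	gfn += page_size >> PAGE_SHIFT;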