On Tue, 2014-09-02 at 16:56 -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Sep 03, 2014 at 06:53:33AM +1000, Benjamin Herrenschmidt wrote:
> > On Mon, 2014-09-01 at 22:55 -0700, Andy Lutomirski wrote:
> > > 
> > > On x86, at least, I doubt that we'll ever see a physically addressed
> > > PCI virtio device for which ACPI advertises an IOMMU, since any sane
> > > hypervisor will just not advertise an IOMMU for the virtio device.
> > > But are there arm64 or PPC guests that use virtio_pci, that have
> > > IOMMUs, and that will malfunction if the virtio_pci driver ends up
> > > using the IOMMU?  I certainly hope not, since these systems might be
> > > very hard-pressed to work right if someone plugged in a physical
> > > virtio-speaking PCI device.
> > 
> > It will definitely not work on ppc64. We always have IOMMUs on
> > pseries (all PCI buses do), and because it's a paravirtualized
> > environment, mapping/unmapping pages means hypercalls -> expensive.
> > 
> > But our virtio implementation bypasses it in qemu, so if virtio-pci
> > starts using the DMA mapping API without changing the DMA ops under the
> > hood, it will break for us.
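
To make the bypass concrete: today virtio hands the device
guest-physical addresses directly, whereas a DMA-API-based virtio-pci
would go through the bus dma_ops for every buffer. A rough sketch of
the contrast (simplified, not the exact virtio_ring.c code):

	/* Today: virtio assumes the device sees guest-physical
	 * addresses, so the ring descriptor gets the raw address
	 * and no IOMMU is involved. */
	desc->addr = virt_to_phys(buf);

	/* With the DMA API: every map/unmap goes through the bus
	 * dma_ops, which on pseries means a TCE hypercall each time. */
	dma_addr_t dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;
	desc->addr = dma;
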
> 
> What are the default dma_ops that Linux guests start with under
> ppc64?

On pseries (which is what we care about the most nowadays) it's
dma_iommu_ops, which in turn calls into the "TCE" code to populate
the IOMMU entries, and that in turn calls the hypervisor.
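
For reference, the map_page hook itself is tiny; the cost is all in
the TCE backend underneath it. Roughly (from memory, so treat the
exact names/signatures as approximate):

	/* arch/powerpc/kernel/dma-iommu.c (approximate) */
	static dma_addr_t dma_iommu_map_page(struct device *dev,
			struct page *page, unsigned long offset,
			size_t size, enum dma_data_direction direction,
			struct dma_attrs *attrs)
	{
		/* Allocate I/O space in the device's TCE table and
		 * program the entries via the platform tce_build hook. */
		return iommu_map_page(dev, get_iommu_table_base(dev),
				      page, offset, size,
				      device_to_mask(dev), direction,
				      attrs);
	}

On pseries that tce_build hook boils down to an H_PUT_TCE hypercall
per 4K entry (plpar_tce_put(), or H_PUT_TCE_INDIRECT for batches),
which is why paying that on every virtio buffer would hurt.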

Cheers,
Ben.

> Thanks!
> > 
> > Cheers,
> > Ben.