Re: [PATCH v2 0/5] Add virtio-iommu driver

2018-07-04 Thread Jean-Philippe Brucker
On 27/06/18 20:59, Michael S. Tsirkin wrote:
>> Another reason to keep the MMIO transport option is that one
>> virtio-iommu can manage DMA from endpoints on multiple PCI domains at
>> the same time, as well as platform devices. Some VMMs might want that,
>> in which case the IOMMU would be a separate platform device.
> 
> Which buses are managed by the IOMMU is separate from the bus
> on which its programming interface resides.

Sorry, I didn't get this. We probably don't want to instantiate a PCI
root complex just for the IOMMU, so it needs to be in the same PCI
segment as the endpoints it manages. For example, in my VM the AMD IOMMU
is presented as 00:02.0, alongside other devices on PCI bus 00.

In any case, I have a solution for virtio-pci that works with DT and
ACPI, and isn't excessively awful. I'll probably send it as part of the
next version.

Thanks,
Jean



Re: [PATCH v2 0/5] Add virtio-iommu driver

2018-06-28 Thread Peter Maydell
On 27 June 2018 at 20:59, Michael S. Tsirkin wrote:
> My point was that virtio-mmio isn't widely used outside of ARM.
> Rather than trying to make everyone use it, IMHO it's better
> to start with PCI.

Yes. I would much rather we let virtio-mmio disappear into
history as a random bit of legacy stuff. One of the things
I didn't like about the virtio-iommu design was that it
resurrected virtio-mmio as something we need to actively care
about again, so if there is a path forward that will work with
PCI virtio, I would much prefer that.

thanks
-- PMM


Re: [PATCH v2 0/5] Add virtio-iommu driver

2018-06-27 Thread Michael S. Tsirkin
On Wed, Jun 27, 2018 at 07:04:46PM +0100, Jean-Philippe Brucker wrote:
> On 26/06/18 19:07, Michael S. Tsirkin wrote:
> > So as I pointed out, a new virtio 0 device isn't really welcome ;)
> 
> Agreed, virtio-iommu is expected to be implemented on virtio 1 and
> later. I'll remove the two legacy-related paragraphs from the spec and
> add a check in the driver as you suggested, to avoid giving the wrong idea.
> 
> > No one bothered implementing virtio 1 in MMIO, for all the work
> > that was put into defining it.
> 
> That is curious, given that the virtio_mmio driver supports virtio 1 and
> from my own experience, porting the MMIO transport to virtio 1 only
> requires updating a few registers, once ANY_LAYOUT, modern virtqueues and
> status are already implemented.
> 
> > The fact that the MMIO part of the
> > spec doesn't seem to allow for transitional devices might
> > or might not have something to do with this.
> 
> Sorry, which part do you have in mind? The spec does provide both
> legacy and modern register layouts, with version numbers to
> differentiate them.

Yes, but there's no way to implement a transitional virtio-mmio
device: the version is either "legacy" or "modern".

So if you implement a modern device, old guests won't work with it.

> > So why make it an MMIO device at all? A system with an IOMMU
> > will have a PCI bus, won't it? And it's pretty common to
> > have the IOMMU be a PCI device on the root bus.
> 
> Having the IOMMU outside PCI seems more common, though. On Arm and Intel
> the IOMMU doesn't have a PCI config space, BARs or capabilities, just a
> plain MMIO region and a static number of interrupts. However the AMD
> IOMMU appears as a PCI device with additional MMIO registers. I would be
> interested in implementing virtio-iommu as a PCI dev at some point, at
> least so that we can use MSI-X.
> 
> The problem is that it requires invasive changes in the firmware
> interfaces and drivers. They need to describe the relationship between
> IOMMU and endpoints, and neither DT nor ACPI IORT expects the
> programming interface to be inside the PCI bus that the IOMMU manages.

They don't particularly care IMHO.

> Describing it as a peripheral is more natural. For AMD it is
> implemented in their driver using IVHD tables, which can't be reused.
> Right now I don't expect any success in proposing changes to firmware
> interfaces or drivers, because the device is new and paravirtualized,
> and works out of the box with MMIO. Hopefully that will change in the
> future, perhaps when someone supports DT for the AMD IOMMU (they do
> describe bindings at the end of the spec, but I don't think they can
> work in Linux with the existing infrastructure).

That's a bit abstract; I don't really understand the issues involved.
The ACPI tables are generated by QEMU, so firmware does not need to care.
And is there even a DT for Intel?

> Another reason to keep the MMIO transport option is that one
> virtio-iommu can manage DMA from endpoints on multiple PCI domains at
> the same time, as well as platform devices. Some VMMs might want that,
> in which case the IOMMU would be a separate platform device.

Which buses are managed by the IOMMU is separate from the bus
on which its programming interface resides.

> > That would remove the need to bother with DT bindings etc.
> 
> That's handled by the firmware drivers and the IOMMU layer; there
> shouldn't be any special treatment at the virtio layer. In general,
> removal of an
> IOMMU needs to be done after removal of all endpoints connected to it,
> which should be described using device_link from the driver core. DT or
> ACPI is only used to tell drivers where to find resources, and to
> describe the DMA topology.
> 
> Thanks,
> Jean

My point was that virtio-mmio isn't widely used outside of ARM.
Rather than trying to make everyone use it, IMHO it's better
to start with PCI.

-- 
MST


Re: [PATCH v2 0/5] Add virtio-iommu driver

2018-06-27 Thread Jean-Philippe Brucker
On 26/06/18 19:07, Michael S. Tsirkin wrote:
> So as I pointed out, a new virtio 0 device isn't really welcome ;)

Agreed, virtio-iommu is expected to be implemented on virtio 1 and
later. I'll remove the two legacy-related paragraphs from the spec and
add a check in the driver as you suggested, to avoid giving the wrong idea.
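
Concretely, I'm thinking of something like this at the top of the probe
function (just a sketch, not the final patch):

    #include <linux/virtio.h>
    #include <linux/virtio_config.h>

    static int viommu_probe(struct virtio_device *vdev)
    {
            /* virtio-iommu requires virtio 1, refuse legacy-only devices */
            if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
                    return -ENODEV;

            /* ... rest of initialization ... */
            return 0;
    }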

> No one bothered implementing virtio 1 in MMIO, for all the work
> that was put into defining it.

That is curious, given that the virtio_mmio driver supports virtio 1 and
from my own experience, porting the MMIO transport to virtio 1 only
requires updating a few registers, once ANY_LAYOUT, modern virtqueues and
status are already implemented.

> The fact that the MMIO part of the
> spec doesn't seem to allow for transitional devices might
> or might not have something to do with this.

Sorry, which part do you have in mind? The spec does provide both
legacy and modern register layouts, with version numbers to
differentiate them.
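
To illustrate, the transport advertises the layout in a single version
register, so a driver can tell the two apart with something as simple as
this (a sketch against the offsets in include/uapi/linux/virtio_mmio.h;
the function name is only for illustration):

    #include <linux/io.h>
    #include <linux/virtio_mmio.h>

    /* Returns the device's register layout: 1 means the legacy
     * (virtio 0.9.x) layout, 2 the modern (virtio 1) one. */
    static int virtio_mmio_layout(void __iomem *base)
    {
            /* The magic value is "virt" in little endian */
            if (readl(base + VIRTIO_MMIO_MAGIC_VALUE) != 0x74726976)
                    return -ENODEV;

            return readl(base + VIRTIO_MMIO_VERSION);
    }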

> So why make it an MMIO device at all? A system with an IOMMU
> will have a PCI bus, won't it? And it's pretty common to
> have the IOMMU be a PCI device on the root bus.

Having the IOMMU outside PCI seems more common, though. On Arm and Intel
the IOMMU doesn't have a PCI config space, BARs or capabilities, just a
plain MMIO region and a static number of interrupts. However the AMD
IOMMU appears as a PCI device with additional MMIO registers. I would be
interested in implementing virtio-iommu as a PCI dev at some point, at
least so that we can use MSI-X.

The problem is that it requires invasive changes in the firmware
interfaces and drivers. They need to describe the relationship between
IOMMU and endpoints, and neither DT nor ACPI IORT expects the
programming interface to be inside the PCI bus that the IOMMU manages.
Describing it as a peripheral is more natural. For AMD it is
implemented in their driver using IVHD tables, which can't be reused.
Right now I don't expect any success in proposing changes to firmware
interfaces or drivers, because the device is new and paravirtualized,
and works out of the box with MMIO. Hopefully that will change in the
future, perhaps when someone supports DT for the AMD IOMMU (they do
describe bindings at the end of the spec, but I don't think they can
work in Linux with the existing infrastructure).

Another reason to keep the MMIO transport option is that one
virtio-iommu can manage DMA from endpoints on multiple PCI domains at
the same time, as well as platform devices. Some VMMs might want that,
in which case the IOMMU would be a separate platform device.

> That would remove the need to bother with DT bindings etc.

That's handled by the firmware drivers and the IOMMU layer; there
shouldn't be any special treatment at the virtio layer. In general,
removal of an
IOMMU needs to be done after removal of all endpoints connected to it,
which should be described using device_link from the driver core. DT or
ACPI is only used to tell drivers where to find resources, and to
describe the DMA topology.
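
For example, the core could create the link when attaching an endpoint
to the IOMMU, roughly like this (a sketch; the exact link flags are
still up for discussion):

    #include <linux/device.h>

    /* Make the endpoint a consumer of the IOMMU, so that unbinding the
     * IOMMU driver unbinds the endpoint driver first. */
    static int viommu_link_endpoint(struct device *endpoint,
                                    struct device *iommu)
    {
            if (!device_link_add(endpoint, iommu,
                                 DL_FLAG_AUTOREMOVE_CONSUMER))
                    return -ENODEV;
            return 0;
    }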

Thanks,
Jean


Re: [PATCH v2 0/5] Add virtio-iommu driver

2018-06-26 Thread Michael S. Tsirkin
On Thu, Jun 21, 2018 at 08:06:50PM +0100, Jean-Philippe Brucker wrote:
> Implement the base virtio-iommu driver, following version 0.7 of the
> specification [1].
> 
> Changes since last version [2]:
> * Address comments, thanks again for the review.
> * As suggested, add a DT binding description in patch 1.
> * Depend on VIRTIO_MMIO=y to fix a build failure¹
> * Switch to v0.7 of the spec, which changes resv_mem parameters and adds
>   an MMIO flag. These are trivial but not backward compatible. Once the
>   device or driver is upstream, updates to the spec will rely on feature
>   bits to stay compatible with this code.
> * Implement the new tlb_sync interface, by splitting add_req() and
>   sync_req(). I noticed a small improvement on netperf stream because
>   the synchronous iommu_unmap() also benefits from this. Other
>   experiments, such as using kmem_cache for requests instead of kmalloc,
>   didn't show any improvement.
> 
> Driver is available on branch virtio-iommu/v0.7 [3], and the x86+ACPI
> prototype is on branch virtio-iommu/devel. That x86 code hasn't changed,
> it still requires the DMA ifdeffery and I lack the expertise to tidy it
> up. The kvmtool example device has been cleaned up and is available on
> branch virtio-iommu/v0.7 [4].
> 
> Feedback welcome!
> 
> Thanks,
> Jean

So as I pointed out, a new virtio 0 device isn't really welcome ;)
No one bothered implementing virtio 1 in MMIO, for all the work
that was put into defining it. The fact that the MMIO part of the
spec doesn't seem to allow for transitional devices might
or might not have something to do with this.

So why make it an MMIO device at all? A system with an IOMMU
will have a PCI bus, won't it? And it's pretty common to
have the IOMMU be a PCI device on the root bus.
That would remove the need to bother with DT bindings etc.


> [1] virtio-iommu specification v0.7
> https://www.spinics.net/lists/linux-virtualization/msg34127.html
> [2] virtio-iommu driver v1
> https://www.spinics.net/lists/kvm/msg164322.html
> [3] git://linux-arm.org/linux-jpb.git virtio-iommu/v0.7
> 
> http://www.linux-arm.org/git?p=linux-jpb.git;a=shortlog;h=refs/heads/virtio-iommu/v0.7
> [4] git://linux-arm.org/kvmtool-jpb.git virtio-iommu/v0.7 
> 
>   ---
> ¹ A word on the module story. Because of complex dependencies, IOMMU
> drivers cannot yet be .ko modules. Enabling it is outside the scope of
> this series but I have a small prototype on branch virtio-iommu/
> module-devel. It seems desirable since some distros currently ship the
> transport code as a module and are unlikely to change this on our account.
> Note that this series works fine with arm64 defconfig, which selects
> VIRTIO_MMIO=y.
> 
> I could use some help to clean this up. Currently my solution is to split
> virtio-iommu into a module and a 3-line built-in stub, which isn't
> graceful but could have been worse.
> 
> Keeping the whole virtio-iommu driver as a builtin would require accessing
> every virtio utility through symbol_get(). So far we only have seven of
> those and could keep a list of pointer ops, but I find this solution
> terrible. If virtio or virtio_mmio is a module, virtio-iommu also needs
> to be one.
> 
> The minimal set of changes to make a module out of an OF-based IOMMU
> driver seems to be:
> * Export IOMMU symbols used by drivers
> * Don't give up deferring probe in of_iommu
> * Move IOMMU OF tables to .rodata
> * Create a static virtio-iommu stub that declares the virtio-iommu OF
>   table entry. The build system doesn't pick up IOMMU_OF_DECLARE when
>   done from an object destined to be built as a module :(
> * Create a device_link between endpoint and IOMMU, to ensure that
>   removing the IOMMU driver also unbinds the endpoint driver. Putting
>   this in IOMMU core seems like a better approach since even when not
>   built as module, unbinding an IOMMU device via sysfs will cause
>   crashes.
> 
> With this, as long as virtio-mmio isn't loaded, the OF code defers probe
> of endpoints that depend on virtio-iommu. Once virtio-mmio is loaded,
> the probe is still deferred until virtio-iommu registers itself to the
> virtio bus. Once virtio-iommu is loaded, probe of endpoints managed by
> the IOMMU follows.
> 
> I'll investigate ACPI IORT when I find some time, though I don't expect
> much complication and suggest we tackle one problem at a time. Since
> module support is a luxury that requires changes to the IOMMU core, it
> is left as a future improvement.
>   ---
> 
> Jean-Philippe Brucker (5):
>   dt-bindings: virtio: Specify #iommu-cells value for a virtio-iommu
>   iommu: Add virtio-iommu driver
>   iommu/virtio: Add probe request
>   iommu/virtio: Add event queue
>   vfio: Allow type-1 IOMMU instantiation for ARM
> 
>  .../devicetree/bindings/virtio/mmio.txt   |8 +
>  MAINTAINERS   |6 +
>  drivers/iommu/Kconfig |   11 +
>  drivers/iommu/Make