On Fri, Jun 04, 2021 at 06:37:27AM +0000, Oleksandr Andrushchenko wrote:
> Hi, all!
> 
> While working on PCI SR-IOV support for ARM I started porting [1] on top
> of current PCI on ARM support [2]. The question I have for this series
> is if we really need emulating SR-IOV code in Xen?
> 
> I have implemented a PoC for SR-IOV on ARM [3] (please see the top 2 patches)
> and it "works for me": MSI support is still WIP, but I was able to see that
> VFs are properly seen in the guest and BARs are properly programmed in p2m.
> 
> What I can't fully understand is if we can live with this approach or there
> are use-cases I can't see.
> 
> Previously I've been told that this approach might not work on FreeBSD
> running as Domain-0, but it seems that "PCI Passthrough is not supported
> (Xen/FreeBSD)" anyway [4].

PCI passthrough is not supported on FreeBSD dom0 because PCI
passthrough is not supported by Xen itself when using a PVH dom0, and
that's the only mode FreeBSD dom0 can use.

PHYSDEVOP_pci_device_add can be added to FreeBSD, so it could be made
to work. I however think this is not the proper way to implement
SR-IOV support.

> 
> I also see that the ACRN hypervisor [5] implements SR-IOV inside it, which
> makes me think I am missing some important use-case on x86 though.
> 
> I would like to ask for any advice on SR-IOV with respect to the hypervisor,
> any pointers to documentation or any other source which might be handy in
> deciding if we do need SR-IOV complexity in Xen.
> 
> And it does bring complexity if you compare [1] and [3]...
> 
> A bit of technical detail on the approach implemented [3]:
> 1. We rely on PHYSDEVOP_pci_device_add.
> 2. We rely on Domain-0 SR-IOV drivers to instantiate VFs.
> 3. BARs are programmed in p2m, implementing the guest view of those (we have
> extended the vPCI code for that, and this path is used for both "normal"
> devices and VFs in the same way).
> 4. No need to trap PCI_SRIOV_CTRL.
> 5. No need to wait 100ms in Xen before attempting to access VF registers when
> enabling virtual functions on the PF - this is handled by Domain-0 itself.

I think the SR-IOV capability should be handled like any other PCI
capability, ie: like we currently handle MSI or MSI-X in vPCI.

It's likely that using some kind of hypercall in order to deal with
SR-IOV could make this easier to implement in Xen, but that just adds
more code to all OSes that want to run as the hardware domain.

OTOH if we properly trap accesses to the SR-IOV capability (like it
was proposed in [1] from your references) we won't have to modify OSes
that want to run as hardware domains in order to handle SR-IOV devices.

IMO going for the hypercall option seems easier now, but adds a burden
to all OSes that want to manage SR-IOV devices that will hurt us long
term.

Thanks, Roger.

> Thank you in advance,
> Oleksandr
> 
> [1] https://lists.xenproject.org/archives/html/xen-devel/2018-07/msg01494.html
> [2] https://gitlab.com/xen-project/fusa/xen-integration/-/tree/integration/pci-passthrough
> [3] https://github.com/xen-troops/xen/commits/pci_phase2
> [4] https://wiki.freebsd.org/Xen
> [5] https://projectacrn.github.io/latest/tutorials/sriov_virtualization.html
