On Fri, Apr 21, 2017 at 05:23:34PM +0100, Paul Durrant wrote:
> > -----Original Message-----
> > From: Roger Pau Monne [mailto:roger....@citrix.com]
[...]
> > +int xen_vpci_read(unsigned int seg, unsigned int bus, unsigned int devfn,
> > +                  unsigned int reg, uint32_t size, uint32_t *data)
> > +{
> > +    struct domain *d = current->domain;
> > +    struct pci_dev *pdev;
> > +    const struct vpci_register *r;
> > +    union vpci_val val = { .double_word = 0 };
> > +    unsigned int data_rshift = 0, data_lshift = 0, data_size;
> > +    uint32_t tmp_data;
> > +    int rc;
> > +
> > +    ASSERT(vpci_locked(d));
> > +
> > +    *data = 0;
> > +
> > +    /* Find the PCI dev matching the address. */
> > +    pdev = pci_get_pdev_by_domain(d, seg, bus, devfn);
> > +    if ( !pdev )
> > +        goto passthrough;
> 
> I hope this can eventually be generalised so I wonder what your intention
> is regarding co-existence between Xen emulated PCI config space,
> pass-through and PCI devices emulated externally. We already have a
> framework for registering PCI devices by SBDF but this code seems to make
> no use of it, which I suspect is likely to cause future conflict.
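For reference, the partial-access handling that the quoted snippet sets up with `data_rshift`/`data_lshift` can be sketched roughly as follows. This is an illustrative standalone sketch, not the Xen code itself: the function name, parameters and values are mine, and it only shows how a read that does not line up exactly with an emulated register could be shifted and masked into place.

```c
#include <stdint.h>

/*
 * Illustrative sketch (names and layout are hypothetical): compose a
 * config-space read from an emulated register whose offset/size need
 * not match the access exactly.
 */
static uint32_t read_partial(uint32_t reg_val, unsigned int reg_offset,
                             unsigned int reg_size,
                             unsigned int access_offset,
                             unsigned int access_size)
{
    uint32_t data;

    /* Limit the register value to its declared size. */
    if ( reg_size < 4 )
        reg_val &= (1u << (reg_size * 8)) - 1;

    if ( access_offset >= reg_offset )
        /* Access starts inside the register: shift wanted bytes down. */
        data = reg_val >> ((access_offset - reg_offset) * 8);
    else
        /* Register starts inside the access: shift its bytes up. */
        data = reg_val << ((reg_offset - access_offset) * 8);

    /* Mask the result to the size of the access. */
    if ( access_size < 4 )
        data &= (1u << (access_size * 8)) - 1;

    return data;
}
```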
Yes, the long term aim is to use this code in order to implement
PCI passthrough for PVH and HVM DomUs also. TBH, I didn't know we already
had such code (I assume you mean the IOREQ-related PCI code). As it is, I
see a couple of issues with that: the first is that the IOREQ code expects
an IOREQ client on the other end, while the code I'm adding here lives
entirely inside the hypervisor. The second is that the IOREQ code ATM only
allows for local PCI accesses, which means I would have to extend it to
also deal with ECAM/MMCFG areas.

I completely agree that at some point this should be made to work
together, but I'm not sure whether it would be better to do that once we
want to also use vPCI for DomUs, so that the Dom0 side is not delayed
further.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
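P.S. The ECAM/MMCFG decode that extending the IOREQ path would need can be sketched as below. The field layout (bus in bits 27:20, device/function in bits 19:12, register offset in bits 11:0) follows the PCIe ECAM definition; the struct and function names here are illustrative, not Xen's.

```c
#include <stdint.h>

/* Decoded PCI configuration address (illustrative names). */
struct ecam_addr {
    unsigned int bus, devfn, reg;
};

/*
 * Recover bus/devfn/register from the offset of an MMIO access into an
 * MMCFG window, per the PCIe ECAM layout.
 */
static struct ecam_addr ecam_decode(uint64_t offset)
{
    struct ecam_addr a = {
        .bus   = (offset >> 20) & 0xff,
        .devfn = (offset >> 12) & 0xff,
        .reg   = offset & 0xfff,
    };

    return a;
}
```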