hi Julien, Oleksandr,
[..]
This patch series only covers use-cases where the device emulator
handles the *entire* PCI Host bridge and PCI (virtio-pci) devices behind
it (i.e. Qemu). Also this patch series doesn't touch vPCI/PCI
pass-through resources, handling, accounting, nothing.
I understood you want one Device Emulator to handle the entire PCI
host bridge. But...
From the
hypervisor we only need help to intercept the config space accesses
that happen in the range [GUEST_VIRTIO_PCI_ECAM_BASE ...
GUEST_VIRTIO_PCI_ECAM_BASE + GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE] and
forward them to the linked device emulator (if any), that's all.
... I really don't see why you need to add code in Xen to trap the
region. If QEMU is dealing with the hostbridge, then it should be able
to register the MMIO region and then do the translation.
[..]
I am afraid we cannot end up exposing only a single PCI Host bridge with
the current model (if we use device emulators running in different
domains, each handling an *entire* PCI Host bridge); this won't work.
That makes sense and it is fine. But see above, I think only the #2 is
necessary for the hypervisor. Patch #5 should not be necessary at all.
[...]
I did checks without patch #5 and can confirm that indeed QEMU & Xen can
do this work without additional modifications to the QEMU code. So I'll
drop this patch from the series.
[..]
+/*
+ * 16 MB is reserved for virtio-pci configuration space, based on the
+ * calculation: 8 bridges x 2 buses x 32 devices x 8 functions x 4 KB = 16 MB
Can you explain how you decided on the "2"?
Good question: we have limited free space available in the memory layout
(we had difficulties finding suitable holes), and we don't expect a lot
of virtio-pci devices, so the "256" buses used for vPCI would be too
much. It was decided to reduce the count significantly, but to select
the maximum that fits into the free space; with "2" buses we still fit
into the chosen holes.
If you don't expect a lot of virtio devices, then why do you need two
buses? Wouldn't one be sufficient?
one should be reasonably sufficient, I agree
+ */
+#define GUEST_VIRTIO_PCI_ECAM_BASE xen_mk_ullong(0x33000000)
+#define GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE xen_mk_ullong(0x01000000)
+#define GUEST_VIRTIO_PCI_HOST_ECAM_SIZE xen_mk_ullong(0x00200000)
+
+/* 64 MB is reserved for virtio-pci memory */
+#define GUEST_VIRTIO_PCI_ADDR_TYPE_MEM xen_mk_ullong(0x02000000)
+#define GUEST_VIRTIO_PCI_MEM_ADDR xen_mk_ullong(0x34000000)
+#define GUEST_VIRTIO_PCI_MEM_SIZE xen_mk_ullong(0x04000000)
+
/*
 * 16MB == 4096 pages reserved for guest to use as a region to map its
 * grant table in.
@@ -476,6 +489,11 @@ typedef uint64_t xen_callback_t;
#define GUEST_MAGIC_BASE xen_mk_ullong(0x39000000)
#define GUEST_MAGIC_SIZE xen_mk_ullong(0x01000000)
+/* 64 MB is reserved for virtio-pci Prefetch memory */
This doesn't seem like a lot depending on your use case. Can you detail
how you came up with "64 MB"?
The same calculation as was done for the configuration space. It was
observed that only 16K is used per virtio-pci device (maybe it can be
bigger for a usual PCI device, I don't know). Please look at the example
DomU log below (the lines that contain "*BAR 4: assigned*"):
What about virtio-gpu? I would expect a bit more memory is necessary for
that use case.
In any case, what I am looking for is some explanation of the limits in
the commit message. I don't particularly care about the exact limit
because this is not part of a stable ABI.
Sure, I'll put a bit more explanation in both the comment and the commit
message. Should I post an updated patch series, with updated resources
and without patch #5, or shall we wait for more comments here?
--
regards,
Sergiy