On Wed, 7 Apr 2021 09:29:22 -0400 "Michael S. Tsirkin" <m...@redhat.com> wrote:
> On Tue, Apr 06, 2021 at 08:15:46PM +0200, Igor Mammedov wrote:
> > On Tue, 6 Apr 2021 16:07:25 +0100
> > Daniel P. Berrangé <berra...@redhat.com> wrote:
> > 
> > > On Tue, Apr 06, 2021 at 03:54:24PM +0100, Daniel P. Berrangé wrote:
> > > > On Mon, Mar 22, 2021 at 07:00:18PM -0400, Michael S. Tsirkin wrote:
> > > > > From: Igor Mammedov <imamm...@redhat.com>
> > > > > 
> > > > > it helps to avoid device naming conflicts when guest OS is
> > > > > configured to use acpi-index for naming.
> > > > > Spec ialso says so:
> > > > > 
> > > > > PCI Firmware Specification Revision 3.2
> > > > > 4.6.7. _DSM for Naming a PCI or PCI Express Device Under Operating Systems
> > > > > "
> > > > > Instance number must be unique under \_SB scope. This instance number does not have to
> > > > > be sequential in a given system configuration.
> > > > > "
> > > > > 
> > > > > Signed-off-by: Igor Mammedov <imamm...@redhat.com>
> > > > > Message-Id: <20210315180102.3008391-4-imamm...@redhat.com>
> > > > > Reviewed-by: Michael S. Tsirkin <m...@redhat.com>
> > > > > Signed-off-by: Michael S. Tsirkin <m...@redhat.com>
> > > > > ---
> > > > >  hw/acpi/pcihp.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
> > > > >  1 file changed, 46 insertions(+)
> > > > > 
> > > > > diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
> > > > > index ceab287bd3..f4cb3c979d 100644
> > > > > --- a/hw/acpi/pcihp.c
> > > > > +++ b/hw/acpi/pcihp.c
> > > > > @@ -52,6 +52,21 @@ typedef struct AcpiPciHpFind {
> > > > >      PCIBus *bus;
> > > > >  } AcpiPciHpFind;
> > > > >  
> > > > > +static gint g_cmp_uint32(gconstpointer a, gconstpointer b, gpointer user_data)
> > > > > +{
> > > > > +    return a - b;
> > > > > +}
> > > > > +
> > > > > +static GSequence *pci_acpi_index_list(void)
> > > > > +{
> > > > > +    static GSequence *used_acpi_index_list;
> > > > > +
> > > > > +    if (!used_acpi_index_list) {
> > > > > +        used_acpi_index_list = g_sequence_new(NULL);
> > > > > +    }
> > > > > +    return used_acpi_index_list;
> > > > > +}
> > > > > +
> > > > >  static int acpi_pcihp_get_bsel(PCIBus *bus)
> > > > >  {
> > > > >      Error *local_err = NULL;
> > > > > @@ -277,6 +292,23 @@ void acpi_pcihp_device_pre_plug_cb(HotplugHandler *hotplug_dev,
> > > > >                     ONBOARD_INDEX_MAX);
> > > > >          return;
> > > > >      }
> > > > > +
> > > > > +    /*
> > > > > +     * make sure that acpi-index is unique across all present PCI devices
> > > > > +     */
> > > > > +    if (pdev->acpi_index) {
> > > > > +        GSequence *used_indexes = pci_acpi_index_list();
> > > > > +
> > > > > +        if (g_sequence_lookup(used_indexes, GINT_TO_POINTER(pdev->acpi_index),
> > > > > +                              g_cmp_uint32, NULL)) {
> > > > > +            error_setg(errp, "a PCI device with acpi-index = %" PRIu32
> > > > > +                       " already exist", pdev->acpi_index);
> > > > > +            return;
> > > > > +        }
> > > > > +        g_sequence_insert_sorted(used_indexes,
> > > > > +                                 GINT_TO_POINTER(pdev->acpi_index),
> > > > > +                                 g_cmp_uint32, NULL);
> > > > > +    }
> > > > 
> > > > This doesn't appear to ensure uniqueness when using PCIe topologies:
> > > > 
> > > >   $ ./build/x86_64-softmmu/qemu-system-x86_64 \
> > > >       -device virtio-net,acpi-index=100 \
> > > >       -device virtio-net,acpi-index=100
> > > >   qemu-system-x86_64: -device virtio-net,acpi-index=100: a PCI device with acpi-index = 100 already exist
> > > > 
> > > >   $ ./build/x86_64-softmmu/qemu-system-x86_64 \
> > > >       -M q35 \
> > > >       -device virtio-net,acpi-index=100
> > > >       -device virtio-net,acpi-index=100
> > > >   ....happily running....
> > > 
> > > In fact the entire concept doesn't appear to work with Q35 at all as
> > > implemented.
> > > 
> > > The 'acpi_index' file in the guest OS never gets created and the NICs
> > > are still called 'eth0', 'eth1'
> > > 
> > > Only with i440fx can I can the "enoNNN" based naming to work with
> > > acpi-index set from QEMU
> > 
> > It is not supported on Q35 yet as it depends on ACPI PCI hotplug infrastructure.
> > Once Julia is done with porting it to Q35, acpi-index will be pulled along with it.
> 
> Right. But for now, should we make it fail instead of being ignored silently?
> If we don't how will managament find out it's not really supported?
> And if we make it fail how will management then find out when it's finally supported?

I had an idea to add a capability flag to MachineInfo in the QMP schema and then
do an ugly check from PCIDevice.realize():

 [1] if (acpi_index != 0 && !current_machine->has_pci_acpi_index) -> error out

However, Daniel said that he didn't think MachineInfo was the right place for it.

The problem is that we can't reject an unsupported acpi-index configuration at
PCIDevice.realize() time: the availability of the feature isn't known before the
first reset event, which overrides native PCI hotplug (SHPC or PCIe) with the
ACPI one if it's enabled. That is too late, since all devices have already been
created by then.

Doing the check at PCIDevice.reset() time doesn't seem right either, as:
 *) it would depend on whether the device hosting ACPI hotplug is reset first
 *) it would make every PCI device query for the ACPI hotplug controller, which
    is the same as current_machine->has_pci_acpi_index, only uglier

Hence acpi-index is simply ignored on machines that do not support it. I don't
see any good way to do this check without refactoring ACPI hotplug so that it is
enabled at device creation time. (I think Julia ran into similar creation/reset
ordering issues in her last Q35 ACPI PCI hotplug series.)

Any suggestions are welcome.

As a quick and admittedly ugly temporary solution it could be a MachineInfo QAPI
schema flag or a (PC)Machine property with the [1] check. After all, it's a board
feature and should originate from there (instead of from the 'random' ACPI
hardware we decided to abuse as a hotplug controller), and later we can refactor
it internally to propagate the flag along the PCI hierarchy properly (while the
external probing stays the same). Rough sketches of both pieces follow below.

PS: I also haven't considered raising an error in mixed configurations, where
only some bridges support ACPI hotplug while others use native hotplug. That's
something to work on.
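
To make [1] concrete, here is a rough sketch of the realize/pre-plug side,
assuming the flag ends up as a has_pci_acpi_index field on MachineClass. Neither
that field nor the acpi_index_supported() helper exists in the tree today; the
names are made up just to show the shape of the check:

/*
 * Sketch only: a hypothetical helper for the PCI pre-plug/realize path.
 * MachineClass::has_pci_acpi_index is an assumed, not yet existing field.
 */
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "hw/boards.h"
#include "hw/pci/pci.h"

static bool acpi_index_supported(PCIDevice *pdev, Error **errp)
{
    MachineClass *mc = MACHINE_GET_CLASS(current_machine);

    /* reject acpi-index on boards that never wire up ACPI PCI hotplug */
    if (pdev->acpi_index && !mc->has_pci_acpi_index) {
        error_setg(errp, "acpi-index is not supported by machine '%s'",
                   mc->name);
        return false;
    }
    return true;
}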
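
The board side would then opt in only where ACPI PCI hotplug is actually wired
up (i440fx/piix4 for now). Again just a sketch using the same made-up field, not
current code:

/* Sketch: hw/i386/pc_piix.c, opting the i440fx machines into the flag. */
static void pc_i440fx_machine_options(MachineClass *m)
{
    /* ... existing i440fx machine options ... */
    m->has_pci_acpi_index = true;   /* hypothetical field from the sketch above */
}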