* Laurent Vivier (lviv...@redhat.com) wrote:
> On 18/10/2019 10:16, Dr. David Alan Gilbert wrote:
> > * Scott Cheloha (chel...@linux.vnet.ibm.com) wrote:
> >> savevm_state's SaveStateEntry TAILQ is a priority queue.  Priority
> >> sorting is maintained by searching from head to tail for a suitable
> >> insertion spot.  Insertion is thus an O(n) operation.
> >>
> >> If we instead keep track of the head of each priority's subqueue
> >> within that larger queue we can reduce this operation to O(1) time.
> >>
> >> savevm_state_handler_remove() becomes slightly more complex to
> >> accommodate these gains: we need to replace the head of a priority's
> >> subqueue when removing it.
> >>
> >> With O(1) insertion, booting VMs with many SaveStateEntry objects is
> >> more plausible.  For example, a ppc64 VM with maxmem=8T has 40000 such
> >> objects to insert.
> >
> > Separate from reviewing this patch, I'd like to understand why you've
> > got 40000 objects.  This feels very very wrong and is likely to cause
> > problems to random other bits of qemu as well.
>
> I think the 40000 objects are the "dr-connectors" that are used to plug
> peripherals (memory, pci card, cpus, ...).
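[For readers following along, the per-priority subqueue-head idea from the quoted
patch description can be sketched roughly as below.  This is an illustrative
standalone program, not the actual QEMU patch: the names (Entry, handlers,
handler_insert, handler_remove, PRI_MAX) are made up, and new entries are placed
at the front of their priority's subqueue for simplicity.  The fallback scan is
bounded by the fixed number of priority levels, so insertion stays O(1) in the
number of entries.]

```c
#include <assert.h>
#include <stddef.h>
#include <sys/queue.h>

#define PRI_MAX 4   /* number of priority levels (illustrative constant) */

typedef struct Entry {
    int priority;               /* lower value sorts earlier in the queue */
    TAILQ_ENTRY(Entry) next;
} Entry;

static TAILQ_HEAD(, Entry) handlers = TAILQ_HEAD_INITIALIZER(handlers);
/* First entry of each priority's subqueue, or NULL if that subqueue is empty */
static Entry *pri_head[PRI_MAX];

/* O(1) insert: put the new entry in front of the current head of its
 * priority's subqueue, or before the next populated priority, or at the
 * tail if it has the largest priority value so far. */
static void handler_insert(Entry *e)
{
    int pri = e->priority;

    if (pri_head[pri]) {
        TAILQ_INSERT_BEFORE(pri_head[pri], e, next);
    } else {
        int p;
        for (p = pri + 1; p < PRI_MAX && !pri_head[p]; p++) {
            /* skip empty subqueues; bounded by PRI_MAX, not queue length */
        }
        if (p < PRI_MAX) {
            TAILQ_INSERT_BEFORE(pri_head[p], e, next);
        } else {
            TAILQ_INSERT_TAIL(&handlers, e, next);
        }
    }
    pri_head[pri] = e;  /* new entry becomes its subqueue's head */
}

/* Removal is the part that gets slightly more complex: if we remove a
 * subqueue head, the next entry of the same priority (if any) takes over. */
static void handler_remove(Entry *e)
{
    if (pri_head[e->priority] == e) {
        Entry *n = TAILQ_NEXT(e, next);
        pri_head[e->priority] = (n && n->priority == e->priority) ? n : NULL;
    }
    TAILQ_REMOVE(&handlers, e, next);
}
```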
Yes, Scott confirmed that in the reply to the previous version.

IMHO nothing in qemu is designed to deal with that many devices/objects -
I'm sure that something other than the migration code is going to get
upset.

Is the structure perhaps wrong somewhere - should there be a single DRC
device that knows about all DRCs?

Dave

> https://github.com/qemu/qemu/blob/master/hw/ppc/spapr_drc.c
>
> They are part of the SPAPR specification.
>
> https://raw.githubusercontent.com/qemu/qemu/master/docs/specs/ppc-spapr-hotplug.txt
>
> CC Michael Roth
>
> Thanks,
> Laurent

--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK