On 12/10/2015 01:15 AM, Bharata B Rao wrote:
> Hi,
> 
> This is an attempt to define a generic CPU device that serves as a
> containing device to underlying arch-specific CPU devices. The motivation
> for this is to have an arch-neutral way to specify CPUs mainly during
> hotplug.
> 
> Instead of individual archs having their own semantics to specify the
> CPU like
> 
> -device POWER8-powerpc64-cpu (pseries)
> -device qemu64-x86_64-cpu (pc)
> -device s390-cpu (s390)
> 
> this patch introduces a new device named cpu-core that could be
> used for all target archs as
> 
> -device cpu-core,socket="sid"
> 
> This adds a CPU core with all its associated threads into the specified
> socket with id "sid". The number of target-architecture-specific CPU threads
> that get created during this operation is based on the CPU topology specified
> using the -smp sockets=S,cores=C,threads=T option. Also, the number of cores
> that can be accommodated in the same socket is dictated by the cores=
> parameter of the same -smp option.
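For concreteness, the arithmetic implied by -smp above can be sketched as a
toy Python model (the function and variable names here are mine, not QEMU's):

```python
# Sketch of the -smp topology arithmetic: one cpu-core hotplug brings in
# threads= CPU threads, a socket holds cores= cores, and enough sockets
# are created upfront to fit maxcpus. Names are illustrative, not QEMU's.

def topology(sockets, cores, threads, maxcpus):
    threads_per_core_add = threads                 # threads created per cpu-core device
    cores_per_socket = cores                       # cores= caps occupancy per socket
    total_sockets = maxcpus // (cores * threads)   # sockets created at boot time
    return threads_per_core_add, cores_per_socket, total_sockets

# -smp 4,sockets=4,cores=2,threads=2,maxcpus=16
print(topology(4, 2, 2, 16))  # (2, 2, 4)
```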
> 
> CPU sockets are represented by QOM objects, and enough socket objects to
> fit max_cpus are created at boot time. As cpu-core devices are created,
> they are linked to the socket object specified by the socket="sid" device
> property.
> 
> Thus the model consists of backend socket objects, which can be considered
> containers of one or more cpu-core devices. Each cpu-core object is
> linked to the appropriate backend socket object, and each CPU thread device
> appears as a child object of its cpu-core device.
> 
> All the required socket objects are created upfront and they can't be deleted.
> Though currently socket objects can be created using object_add monitor
> command, I am planning to prevent that so that a guest boots with the
> required number of sockets and only CPU cores can be hotplugged into
> them.
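The containment model described above could be pictured with a toy sketch
like this (plain Python; the class names and the capacity check are my
reading of the proposal, not actual QEMU code):

```python
# Toy model: socket backends exist upfront and cannot be deleted;
# cpu-core devices are linked in later, and a full socket rejects
# further cores. All names are illustrative, not QEMU's.

class Socket:
    def __init__(self, sid, max_cores):
        self.sid = sid
        self.max_cores = max_cores
        self.cores = []

class Core:
    def __init__(self, threads):
        # each core carries its CPU threads as children
        self.threads = ["thread[%d]" % i for i in range(threads)]

def link_core(socket, core):
    # mirrors -device cpu-core,socket="sid"
    if len(socket.cores) >= socket.max_cores:
        raise ValueError("socket %s is full" % socket.sid)
    socket.cores.append(core)

# -smp 4,sockets=4,cores=2,threads=2,maxcpus=16 -> 4 sockets upfront
sockets = [Socket("cpu-socket%d" % i, max_cores=2) for i in range(4)]
link_core(sockets[0], Core(threads=2))
link_core(sockets[0], Core(threads=2))
```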
> 
> CPU hotplug granularity
> -----------------------
> CPU hotplug will now be done in cpu-core device granularity.
> 
> This patchset includes a patch to prevent topologies that result in
> partially filled cores. Hence with this patchset, we will always
> have fully filled cpu-core devices both for boot time and during hotplug.
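The "no partially filled cores" check presumably boils down to divisibility
on core boundaries; a hedged sketch (my naming, not the patch's actual
vl.c code):

```python
# Sketch: reject topologies where the boot or max CPU count doesn't land
# on a core boundary, so every cpu-core device is fully filled.
# Illustrative only.

def validate_topology(smp_cpus, max_cpus, threads):
    if smp_cpus % threads != 0:
        raise ValueError("smp_cpus=%d leaves a partially filled core "
                         "(threads=%d)" % (smp_cpus, threads))
    if max_cpus % threads != 0:
        raise ValueError("maxcpus=%d leaves a partially filled core "
                         "(threads=%d)" % (max_cpus, threads))

validate_topology(4, 16, 2)   # OK: 2 full cores at boot, 8 at max
```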
> 
> For archs like PowerPC, where there is no requirement to fully mirror
> the physical system, hotplugging CPUs at core granularity is common.
> While core-level hotplug fits naturally for such archs, for others
> that want socket-level hotplug, could higher-level tools like libvirt
> perform multiple core hotplugs in response to one socket hotplug
> request?
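If a management tool were to emulate socket-level hotplug that way, the
expansion would be mechanical; a sketch (the device_add syntax follows the
cover letter, but the id scheme is my invention):

```python
# Sketch: expand one socket-level hotplug request into cores= many
# core-level device_add monitor commands. The id naming is hypothetical.

def socket_hotplug_cmds(sid, cores, first_core_id):
    return ["device_add cpu-core,socket=%s,id=core%d" % (sid, first_core_id + i)
            for i in range(cores)]

for cmd in socket_hotplug_cmds("cpu-socket1", 2, 3):
    print(cmd)
```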
> 
> Are there archs that would need thread level CPU addition ?
> 
> Boot time CPUs as cpu-core devices
> ----------------------------------
> In this patchset, I am converting the boot time CPU initialization
> (from the -smp option) to initialize the required number of cpu-core
> devices and link them with the appropriate socket objects.
> 
> Initially I thought we should be able to completely replace -smp with
> -device cpu-core, but then I realized that at least the x86 and pseries
> guests' machine init code depends on the first CPU being available
> in order to work correctly.
> 
> Currently I have converted boot CPUs to cpu-core devices only for the
> PowerPC sPAPR and i386 PC targets. I am not really sure about the i386
> changes; the intention in this iteration was to check whether it is indeed
> possible to fit i386 into the cpu-core model. Having said that, I am able
> to boot an x86 guest with this patchset.

I attempted a quick conversion of s390 to using cpu-core, but it looks
like we'd have an issue preventing s390 from using cpu-core immediately
-- it relies on cpu_generic_init, which s390 specifically avoids today
because we don't have support for cpu_models.  Not sure if other
architectures will have the same issue.

I agree with Igor's sentiment of separating the issue of device_add
hotplug vs generic QOM view -- s390 could support device_add/del for
s390-cpu now, but the addition of cpu-core just adds more requirements
before we can allow for hotplug, without providing any immediate benefit
since s390 doesn't currently surface any topology info to the guest.

Matt

> 
> NUMA
> ----
> TODO: In this patchset, I haven't explicitly done anything for NUMA yet.
> I am thinking if we could add node=N option to cpu-core device.
> That could specify the NUMA node to which the CPU core belongs.
> 
> -device cpu-core,socket="sid",node=N
> 
> QOM composition tree
> ---------------------
> The QOM composition tree for x86, where I don't have CPU hotplug enabled
> but am just initializing boot CPUs as cpu-core devices, appears like this:
> 
> -smp 4,sockets=4,cores=2,threads=2,maxcpus=16
> 
> /machine (pc-i440fx-2.5-machine)
>   /unattached (container)
>     /device[0] (cpu-core)
>       /thread[0] (qemu64-x86_64-cpu)
>       /thread[1] (qemu64-x86_64-cpu)
>     /device[4] (cpu-core)
>       /thread[0] (qemu64-x86_64-cpu)
>       /thread[1] (qemu64-x86_64-cpu)
> 
> For PowerPC where I have CPU hotplug enabled:
> 
> -smp 4,sockets=4,cores=2,threads=2,maxcpus=16 -device cpu-core,socket=cpu-socket1,id=core3
> 
> /machine (pseries-2.5-machine)
>   /unattached (container)
>     /device[1] (cpu-core)
>       /thread[0] (host-powerpc64-cpu)
>       /thread[1] (host-powerpc64-cpu)
>     /device[2] (cpu-core)
>       /thread[0] (host-powerpc64-cpu)
>       /thread[1] (host-powerpc64-cpu)
>   /peripheral (container)
>     /core3 (cpu-core)
>       /thread[0] (host-powerpc64-cpu)
>       /thread[1] (host-powerpc64-cpu)
> 
> As can be seen, the boot CPUs and the hotplugged CPU come under separate
> parents. I guess I should work towards getting both boot time and
> hotplugged CPUs under the same parent?
> 
> Socket ID generation
> ---------------------
> In the current approach the socket ID generation is somewhat implicit.
> All the socket objects are created with a fixed format for IDs, like
> cpu-socket0, cpu-socket1, etc., and the machine init code of each arch is
> expected to use the same IDs when creating cpu-core devices to link each
> core to the right object. Even the user needs to know these IDs at
> device_add time. Maybe I could add an "info cpu-sockets" command that
> gives information about all the existing sockets and their core-occupancy
> status.
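The implicit ID scheme and such an occupancy report might look like the
following sketch (the report format is my guess at what an "info
cpu-sockets" command could print, not anything in the patchset):

```python
# Sketch of the fixed socket ID scheme and an occupancy report in the
# spirit of a hypothetical "info cpu-sockets" command. Illustrative only.

def socket_ids(maxcpus, cores, threads):
    nsockets = maxcpus // (cores * threads)
    return ["cpu-socket%d" % i for i in range(nsockets)]

def info_cpu_sockets(occupancy, cores_per_socket):
    # occupancy maps socket id -> number of cpu-core devices linked to it
    return ["%s: %d/%d cores" % (sid, occupancy[sid], cores_per_socket)
            for sid in sorted(occupancy)]

# -smp ...,sockets=4,cores=2,threads=2,maxcpus=16
print(socket_ids(16, 2, 2))
```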
> 
> Finally, I understand that this is a simplistic model and it probably
> wouldn't support all the notions around CPU topology and hotplug that we
> would like to support for all archs. The intention of this RFC is to
> start somewhere and seek inputs from the community.
> 
> Bharata B Rao (9):
>   vl: Don't allow CPU topologies with partially filled cores
>   cpu: Store CPU typename in MachineState
>   cpu: Don't realize CPU from cpu_generic_init()
>   cpu: CPU socket backend
>   vl: Create CPU socket backend objects
>   cpu: Introduce CPU core device
>   spapr: Convert boot CPUs into CPU core device initialization
>   target-i386: Set apic_id during CPU initfn
>   pc: Convert boot CPUs into CPU core device initialization
> 
>  hw/cpu/Makefile.objs        |  1 +
>  hw/cpu/core.c               | 98 +++++++++++++++++++++++++++++++++++++++++++++
>  hw/cpu/socket.c             | 48 ++++++++++++++++++++++
>  hw/i386/pc.c                | 64 +++++++++--------------------
>  hw/ppc/spapr.c              | 32 ++++++++++-----
>  include/hw/boards.h         |  1 +
>  include/hw/cpu/core.h       | 28 +++++++++++++
>  include/hw/cpu/socket.h     | 26 ++++++++++++
>  qom/cpu.c                   |  6 ---
>  target-arm/helper.c         | 16 +++++++-
>  target-cris/cpu.c           | 16 +++++++-
>  target-i386/cpu.c           | 37 ++++++++++++++++-
>  target-i386/cpu.h           |  1 +
>  target-lm32/helper.c        | 16 +++++++-
>  target-moxie/cpu.c          | 16 +++++++-
>  target-openrisc/cpu.c       | 16 +++++++-
>  target-ppc/translate_init.c | 16 +++++++-
>  target-sh4/cpu.c            | 16 +++++++-
>  target-tricore/helper.c     | 16 +++++++-
>  target-unicore32/helper.c   | 16 +++++++-
>  vl.c                        | 26 ++++++++++++
>  21 files changed, 439 insertions(+), 73 deletions(-)
>  create mode 100644 hw/cpu/core.c
>  create mode 100644 hw/cpu/socket.c
>  create mode 100644 include/hw/cpu/core.h
>  create mode 100644 include/hw/cpu/socket.h
> 

