On Thu, 19 Jan 2017 09:45:11 +0000
"Daniel P. Berrange" <berra...@redhat.com> wrote:

> On Wed, Jan 18, 2017 at 06:13:16PM +0100, Igor Mammedov wrote:
> > 
> > Series introduces a new CLI option to allow mapping cpus to numa
> > nodes using public properties [socket|core|thread]-ids instead of
> > internal cpu-index and moving cpu<->node mapping from global bitmaps
> > to PCMachineState struct.  
> 
> What is the benefit of this change to apps ? Obviously libvirt uses
> the current syntax, but I'm not aware of what problems that has - why
> would libvirt want to use this new syntax instead ?
The current syntax, -numa node,cpus=..., depends on cpu-index, which is
internal to QEMU. External users have no reliable way to know which CPU
is associated with which cpu-index without re-implementing the cpu-index
assignment logic, which depends on QEMU version, target, machine type
and topology.
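
For comparison, the existing syntax looks roughly like this (the
topology below is just an illustration):

  $QEMU -M pc -smp 3,sockets=3,maxcpus=3 \
      -numa node,nodeid=0,cpus=0 \
      -numa node,nodeid=1,cpus=1 \
      -numa node,nodeid=2,cpus=2

Here 0/1/2 are cpu-index values assigned internally by QEMU, so the user
cannot tell which socket/core/thread each number actually refers to.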

The new '-numa cpu' option maps CPUs to NUMA nodes using the same CPU
terms that are used with device_add/-device commands. Management can use
the query-hotpluggable-cpus command to get the list of possible CPUs
with their socket-id/core-id/thread-id property values.
For example, without NUMA mapping the CLI could look like:
  $QEMU -M pc -smp 1,sockets=3,maxcpus=3 \
      -device qemu64-x86_64-cpu,socket-id=1,core-id=0,thread-id=0

(qemu) info hotpluggable-cpus 
Hotpluggable CPUs:
  type: "qemu64-x86_64-cpu"
  vcpus_count: "1"
  CPUInstance Properties:
    socket-id: "2"
    core-id: "0"
    thread-id: "0"
  type: "qemu64-x86_64-cpu"
  vcpus_count: "1"
  qom_path: "/machine/peripheral-anon/device[0]"
  CPUInstance Properties:
    socket-id: "1"
    core-id: "0"
    thread-id: "0"
  type: "qemu64-x86_64-cpu"
  vcpus_count: "1"
  qom_path: "/machine/unattached/device[0]"
  CPUInstance Properties:
    socket-id: "0"
    core-id: "0"
    thread-id: "0"


Based on that list, one could extend the CLI with a NUMA mapping:

  $QEMU -M pc -smp 1,sockets=3,maxcpus=3 \
      -device qemu64-x86_64-cpu,socket-id=1,core-id=0,thread-id=0 \
      -numa cpu,socket-id=0,core-id=0,thread-id=0,node-id=0 \
      -numa cpu,socket-id=1,core-id=0,thread-id=0,node-id=1 \
      -numa cpu,socket-id=2,core-id=0,thread-id=0,node-id=2 

(qemu) info hotpluggable-cpus 
Hotpluggable CPUs:
  type: "qemu64-x86_64-cpu"
  vcpus_count: "1"
  CPUInstance Properties:
    node-id: "2"
    socket-id: "2"
    core-id: "0"
    thread-id: "0"
  type: "qemu64-x86_64-cpu"
  vcpus_count: "1"
  qom_path: "/machine/peripheral-anon/device[0]"
  CPUInstance Properties:
    node-id: "1"
    socket-id: "1"
    core-id: "0"
    thread-id: "0"
  type: "qemu64-x86_64-cpu"
  vcpus_count: "1"
  qom_path: "/machine/unattached/device[0]"
  CPUInstance Properties:
    node-id: "0"
    socket-id: "0"
    core-id: "0"
    thread-id: "0"

As discussed previously, there is a chicken/egg issue: management has to
get the query-hotpluggable-cpus result for a specific combination of
machine type and -smp options. That only needs to be done once, when
creating a configuration; afterwards the CLI NUMA mapping can be used to
start the VM without calling query-hotpluggable-cpus again.
The '-numa cpu' CLI option is also needed for setting a known mapping
on the target side of migration.
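
On the migration target the same '-numa cpu' options would simply be
repeated, e.g. (a sketch; the incoming URI is only a placeholder):

  $QEMU -M pc -smp 1,sockets=3,maxcpus=3 \
      -device qemu64-x86_64-cpu,socket-id=1,core-id=0,thread-id=0 \
      -numa cpu,socket-id=0,core-id=0,thread-id=0,node-id=0 \
      -numa cpu,socket-id=1,core-id=0,thread-id=0,node-id=1 \
      -numa cpu,socket-id=2,core-id=0,thread-id=0,node-id=2 \
      -incoming tcp:0:4444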

Previous consensus has been that the only way to avoid the two-stage
discovery/configuration is to start QEMU in paused mode and then do the
query + mapping at runtime via QMP before letting the guest run with the
'continue' command.
That part hasn't been implemented in this series yet.
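
As a rough sketch of that flow (the command that would set the mapping
at runtime is not part of this series, so it is only a placeholder):

  $QEMU -S -qmp unix:/tmp/qmp.sock,server,nowait \
      -M pc -smp 1,sockets=3,maxcpus=3

  -> { "execute": "query-hotpluggable-cpus" }
  <- { "return": [ ... ] }
  ... map CPUs to nodes here via a future QMP command ...
  -> { "execute": "cont" }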

> 
> 
> Regards,
> Daniel

