On Thu, 18 Feb 2016 14:39:52 +1100
David Gibson <da...@gibson.dropbear.id.au> wrote:

> On Tue, Feb 16, 2016 at 11:36:55AM +0100, Igor Mammedov wrote:
> > On Mon, 15 Feb 2016 20:43:41 +0100
> > Markus Armbruster <arm...@redhat.com> wrote:
> >   
> > > Igor Mammedov <imamm...@redhat.com> writes:
> > >   
> > > > it will allow mgmt to query present and possible-to-hotplug CPUs.
> > > > A target platform that wishes to support the command is required
> > > > to set the board-specific MachineClass.possible_cpus() hook,
> > > > which will return a list of possible CPUs with the options
> > > > that would be needed for hotplugging them.
> > > >
> > > > For the RFC there are:
> > > >    'arch_id': 'int' - mandatory unique CPU number,
> > > >                       for x86 it's the APIC ID, for ARM it's the MPIDR
> > > >    'type': 'str' - CPU object type for use with device_add
> > > >
> > > > and a set of optional fields that would allow mgmt tools
> > > > to know at what granularity and where a new CPU could be
> > > > hotplugged:
> > > > [node],[socket],[core],[thread]
> > > > Hopefully that should cover the needs of CPU hotplug for the
> > > > major targets, and we can extend the structure in the future,
> > > > adding more fields if needed.
> > > >
> > > > also, for present CPUs there is a 'cpu_link' field which
> > > > would allow mgmt to inspect whatever object/abstraction
> > > > the target platform considers a CPU object.
> > > >
> > > > For RFC purposes this is implemented only for the x86 target so far.
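
For illustration, a reply from such a query could look roughly like this
(the command name, CPU type and values are made up for the example; a
present CPU carries 'cpu_link', a possible-but-not-yet-plugged one does not):

   -> { "execute": "query-possible-cpus" }
   <- { "return": [
          { "arch_id": 0, "type": "qemu64-x86_64-cpu",
            "node": 0, "socket": 0, "core": 0, "thread": 0,
            "cpu_link": "/machine/unattached/device[0]" },
          { "arch_id": 1, "type": "qemu64-x86_64-cpu",
            "node": 0, "socket": 0, "core": 1, "thread": 0 }
        ] }
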
> > > 
> > > Adding ad hoc queries as we go won't scale.  Could this be solved by a
> > > generic introspection interface?  
> > Do you mean generic QOM introspection?
> > 
> > Using QOM we could have a '/cpus' container and create QOM links
> > for existing (populated links) and possible (empty links) CPUs.
> > However, in that case the link's name would need to have a special
> > format that conveys the information necessary for mgmt to hotplug
> > a CPU object, at least:
> >   - where: [node],[socket],[core],[thread] options
> >   - optionally, what CPU object to use with the device_add command
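
For example (the link-name format here is made up just to show the idea;
qom-list output abbreviated):

   -> { "execute": "qom-list", "arguments": { "path": "/cpus" } }
   <- { "return": [
          { "name": "node0-socket0-core0-thread0", "type": "link<X86CPU>" },
          { "name": "node0-socket0-core1-thread0", "type": "link<X86CPU>" }
        ] }
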
> 
> Hmm.. is it not enough to follow the link and get the topology
> information by examining the target?
One can't follow a link if it's an empty one, hence
CPU placement information has to be provided some other way,
either:
 * by precreating cpu-package objects with properties that
   describe the placement /which could be inspected via QOM/
or
 * via a QMP/HMP command that provides the same information,
   only without the need to precreate anything. The only difference
   is that it allows using -device/device_add for new CPUs
   (see the sketch below).
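
With the information such a command returns, mgmt could then plug a new
CPU roughly like this (the CPU type and the apic-id property are only
illustrative, derived from the 'type' and 'arch_id' fields above; whatever
properties a board really requires would be described by the command's
output):

   -> { "execute": "device_add",
        "arguments": { "driver": "qemu64-x86_64-cpu", "apic-id": 1 } }
   <- { "return": {} }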

Considering that we would need to create an HMP command so the user
could inspect possible CPUs from the monitor, it would need to do the
same as the QMP command regardless of whether it's cpu-package objects
or just board-calculated info at runtime.
 
> In the design Eduardo and I have been discussing we're actually not
> planning to allow device_add to construct CPU packages - at least, not
> for the time being.  The idea is that the machine type will construct
> enough packages for maxcpus, and management just toggles them on and
> off.
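
Presumably that toggling would then look something like this from the
mgmt side (the package path and the property name are purely hypothetical
here, just to illustrate the on/off model):

   -> { "execute": "qom-set",
        "arguments": { "path": "/machine/cpu-package[1]",
                       "property": "realized", "value": true } }
   <- { "return": {} }
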
Another question is how this would work wrt migration.

> We can eventually allow construction of new packages with device_add,
> but for now that gets hidden inside the platform until we've worked
> out more details.
> 
> > Another approach to QOM introspection would be to model a hierarchy
> > of objects like node/socket/core...; that's what Andreas
> > worked on. Only it still suffers from the same issue as above
> > wrt introspection and hotplug: one can pre-create empty
> > [node][socket][core] containers at startup, but then the
> > leaf nodes that could be hotplugged would be links anyway,
> > and again we would need to give them specially formatted names
> > (not well documented, at that) that mgmt could make sense of.
> > That hierarchy would need to become a stable ABI once
> > mgmt starts using it, and the QOM tree is currently too
> > unstable for that. For some targets it also means creating dummy
> > containers like node/socket/core, e.g. for x86, where just
> > modeling a thread is sufficient.
> 
> I'd prefer to avoid exposing the node/socket/core hierarchy through
> the QOM interfaces as much as possible.  Although all systems I know
> of have a hierarchy something like that, exactly what the levels are
> may vary, so I think it's better not to bake that into our interface.
> 
> Properties giving core/socket/node id values aren't too bad, but
> building a whole tree mirroring that hierarchy seems like asking for
> trouble.
It's ok to have a flat array of cpu-packages as well, only
they should provide mgmt with information saying where a
CPU could be plugged (meaning node/socket/core/thread
and/or some other properties; I guess that's a target-dependent
thing), so that the user could select where the CPU goes and do
other actions after plugging it, like pinning VCPU threads to the
correct host node/cpu.
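
For instance, after plugging a CPU, mgmt could look up the new VCPU's
host thread via query-cpus and pin it to the chosen host node/cpu
(output trimmed to the relevant fields):

   -> { "execute": "query-cpus" }
   <- { "return": [
          { "CPU": 0, "current": true, "thread_id": 27241 },
          { "CPU": 1, "current": false, "thread_id": 27298 }
        ] }

The thread_id is what would then be bound to the selected host node/cpu
(e.g. via libvirt's vcpupin or taskset).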

> 
> > A similar but somewhat more abstract approach was suggested
> > by David: https://lists.gnu.org/archive/html/qemu-ppc/2016-02/msg00000.html
> > 
> > The benefit of a dedicated, CPU-hotplug-focused QMP command is that
> > it can be abstract enough to suit most targets, not depend
> > on how a target models CPUs internally, and still provide the
> > information needed for hotplugging a CPU object.
> > That way we can split the efforts of how we model/refactor CPUs
> > internally and how mgmt would work with them using
> > -device/device_add.
> >   
> 
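
As a sketch, the QAPI schema for such a command could be as small as
something like this (names are illustrative only, not what the RFC patch
actually uses):

   { 'struct': 'PossibleCpu',
     'data': { 'arch_id': 'int',
               'type': 'str',
               '*node': 'int', '*socket': 'int',
               '*core': 'int', '*thread': 'int',
               '*cpu_link': 'str' } }

   { 'command': 'query-possible-cpus',
     'returns': ['PossibleCpu'] }

Anything target-specific would then only show up as additional optional
members rather than in the shape of the command itself.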

