On Tue, 1 Mar 2016 12:19:21 +1100
David Gibson <da...@gibson.dropbear.id.au> wrote:

> On Mon, Feb 29, 2016 at 04:42:58PM +0100, Igor Mammedov wrote:
> > On Thu, 25 Feb 2016 14:52:06 -0300
> > Eduardo Habkost <ehabk...@redhat.com> wrote:
> >   
> > > On Wed, Feb 24, 2016 at 03:42:18PM +0100, Igor Mammedov wrote:  
> > > > On Tue, 23 Feb 2016 18:26:20 -0300
> > > > Eduardo Habkost <ehabk...@redhat.com> wrote:
> > > >     
> > > > > On Tue, Feb 23, 2016 at 10:46:45AM +0100, Igor Mammedov wrote:    
> > > > > > On Mon, 22 Feb 2016 13:54:32 +1100
> > > > > > David Gibson <da...@gibson.dropbear.id.au> wrote:      
> > > > > [...]    
> > > > > > > This is why Eduardo suggested - and I agreed - that it's probably
> > > > > > > better to implement the "1st layer" as an internal 
> > > > > > > structure/interface
> > > > > > > only, and implement the 2nd layer on top of that.  When/if we 
> > > > > > > need to
> > > > > > > we can revisit a user-accessible interface to the 1st layer.      
> > > > > > We are going around QOM based CPU introspecting interface for
> > > > > > years now and that's exactly what 2nd layer is, just another
> > > > > > implementation. I've just lost hope in this approach.
> > > > > > 
> > > > > > What I'm suggesting in this RFC is to forget controversial
> > > > > > QOM approach for now and use -device/device_add + QMP 
> > > > > > introspection,      
> > > > > 
> > > > > You have a point about it looking controversial, but I would like
> > > > > to understand why exactly it is controversial. Discussions seem
> > > > > to get stuck every single time we try to do something useful with
> > > > > the QOM tree, and I don't understand why.    
> > > > Maybe because we are trying to create a universal solution to fit
> > > > ALL platforms? And every time someone posts patches to show an
> > > > implementation, it would break something in an existing machine
> > > > or be incomplete in terms of how the interface would work wrt
> > > > mgmt/CLI/migration.    
> > > 
> > > That's true.
> > >   
> > > >     
> > > > >     
> > > > > > i.e. completely split the interface from how boards internally
> > > > > > implement CPU hotplug.      
> > > > > 
> > > > > A QOM-based interface may still split the interface from how
> > > > > boards internally implement CPU hotplug. They don't need to
> > > > > affect the device tree of the machine, we just need to create QOM
> > > > > objects or links at predictable paths, that implement certain
> > > > > interfaces.    
> > > > Aside from our not being able to reach consensus for a long time,
> > > > I'm fine with an isolated QOM interface if it allows us to move forward.
> > > > However, a static QMP/QAPI interface seems to be more self-describing
> > > > and better documented than the current very flexible but poorly
> > > > self-describing QOM.    
> > > 
> > > You have a good point: QMP is more stable and better documented.
> > > QOM is easier for making experiments, and I would really like to
> > > see it being used more. But if we still don't understand the
> > > requirements enough to design a QMP interface, we won't be able
> > > to implement the same functionality using QOM either.
> > > 
> > > If we figure out the requirements, I believe we should be able to
> > > design equivalent QMP and QOM interfaces.  
> > So, in order not to stall CPU hotplug progress, I'd start with a stable QMP
> > query interface for general use, leaving the experimental QOM interface for
> > later, since from a mgmt point of view it is hard to discover and poorly
> > documented, meaning mgmt would have to:
> >  - instantiate a particular machine type to find out whether the QOM
> >    interface is supported, i.e. '-machine none' won't work with it as it
> >    is board dependent, VS the static compile-time qapi-schema in the QMP case
> >  - execute a bunch of qom-list/qom-get requests over the wire to
> >    enumerate/query objects starting at some fixed entry point
> >    (/machine/cpus), VS a single command that does 'atomic' enumeration
> >    in the QMP case.  
> 
> That sounds reasonable to me.
> 
> However, before even that, I think we need to work out exactly what
> device_add of a multi-thread cpu module looks like.  I think that's
> less of a solved problem than everyone seems to be assuming.
S390 seems to be interested only in thread level hotplug:

   device_add thread-type,thread=1

For x86 I see 2 cases. The current thread level, which likely also
applies to the ARM virt board:

   device_add thread-type,[node=N,]socket=X,core=Y,thread=1

and, if we decide to do x86 hotplug at socket level, an additional
multi-threaded variant for a new machine type:
 
   device_add socket-type,[node=N,]socket=X

For sPAPR it would be:

  device_add socket-type,core=X
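
Just to sketch it (the *-type driver names above are placeholders and the
exact set of properties is board dependent), the same hotplug request sent
over QMP rather than the monitor could look roughly like:

   -> { "execute": "device_add",
        "arguments": { "driver": "socket-type", "core": 2,
                       "id": "core2" } }
   <- { "return": {} }

with "id" being an arbitrary, management-chosen identifier.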
   
For homogeneous CPUs we can continue to use the -smp cores,threads options
for describing the internal multi-threaded CPU layout. These options could
even be converted to global properties for TYPE_CPU_SOCKET.cores and
TYPE_CPU_CORE.threads so that they would be set automatically on all CPU
objects.
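
On the command line that could mean (assuming hypothetical "cpu-socket"
and "cpu-core" user-visible type names for TYPE_CPU_SOCKET and
TYPE_CPU_CORE) that something like:

   -smp 8,sockets=2,cores=2,threads=2

would be applied internally as the equivalent of:

   -global cpu-socket.cores=2 -global cpu-core.threads=2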

Heterogeneous CPUs obviously don't fit into the -smp world and would
require more/other properties to describe their configuration. Even so, a
board which provides its layout via query-hotpluggable-cpus could supply
the list of options needed for a particular CPU slot. Management could
then use those options to hotplug a CPU, and might do some processing on
them where it makes sense (e.g. thread pinning).
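
As a rough sketch only (the actual schema for query-hotpluggable-cpus is
still to be defined), such an exchange could look like:

   -> { "execute": "query-hotpluggable-cpus" }
   <- { "return": [
          { "type": "socket-type", "props": { "node": 0, "socket": 1 } },
          { "type": "socket-type", "props": { "node": 0, "socket": 0 },
            "qom-path": "/machine/unattached/device[0]" } ] }

where mgmt would feed the "props" of an empty slot (one without a
"qom-path") back into device_add.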
