On Mon, 09 Dec 2013 16:16:57 +0100
Paolo Bonzini <pbonz...@redhat.com> wrote:

> Il 09/12/2013 16:08, Igor Mammedov ha scritto:
> > On Mon, 09 Dec 2013 15:36:35 +0100
> > Paolo Bonzini <pbonz...@redhat.com> wrote:
> > 
> >> Il 09/12/2013 15:14, Igor Mammedov ha scritto:
> >>>>>
> >>>>> Makes sense.  realize() for the "plug" handler, and qdev_unplug for the
> >>>>> unplug handler, I guess.
> >>> Just to be sure, I've meant not DEVICE.realize() but each device specific
> >>> one.
> >>
> >> If it's each specific device, then why should the hotplug handler link
> >> be in DeviceState?
> > The reason I've put it there is to eventually replace allow_hotplug field 
> > with it,
> > it also reduces code duplication (i.e. we wont' have to add it in PCIDevice,
> > DimmDevice ...) and potentially allows to use NULL for error in
> > property lookup since each bus will have it.
> 
> I agree that's the right thing to do.
> 
> >> I think it should be in device_set_realized.
> > if we dub it nofail then it's fine, otherwise failure path becomes more 
> > complicated.
> > 
> > Calling handler in specific device realize() allows to gracefully abort
> > realize() since that device knows what needs to be done to do so:
> > For example:
> >  @@ -1720,6 +1714,8 @@ static int pci_qdev_init(DeviceState *qdev)
> >  ...
> > +            hdc->hotplug(hotplug_dev, qdev, &local_err);
> > +            if (error_is_set(&local_err)) {
> > +                int r = pci_unregister_device(&pci_dev->qdev);
> > 
> > Calling handler in realize will not allow to do it.
> > It's just much more flexible to call handler from specific device since it 
> > knows
> > when it's the best to call handler and how to deal with failure.
> 
> We could have separate check/plug methods.  Only check can fail, it must
> be idempotent, and it can be invoked while the device is still unrealized.
The reasons I've stated before apply to 'check' as well: only the specific
device knows when it's fine to call it. That would just replace "plug" with
"check" in realize() for not much benefit.
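
For reference, here is a minimal standalone sketch of the split you suggest
(hypothetical, simplified types — not the actual QEMU structures): check()
may fail and is idempotent, plug() is then guaranteed nofail:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of the proposed check/plug split: only check()
 * can fail, it must be idempotent, and it may run while the device
 * is still unrealized; plug() is only called after check() succeeded
 * and therefore never fails. */

typedef struct Error { const char *msg; } Error;
typedef struct DeviceState DeviceState;

typedef struct HotplugHandlerClass {
    /* may be called repeatedly; reports problems via errp */
    void (*check)(DeviceState *dev, Error **errp);
    /* nofail: only called after check() succeeded */
    void (*plug)(DeviceState *dev);
} HotplugHandlerClass;

struct DeviceState {
    int realized;
};

/* Generic realize path: check first, plug only on success. */
static int generic_realize(DeviceState *dev, HotplugHandlerClass *hdc,
                           Error **errp)
{
    Error *local_err = NULL;

    if (hdc->check) {
        hdc->check(dev, &local_err);
    }
    if (local_err) {
        *errp = local_err;   /* abort cleanly, nothing to undo */
        return -1;
    }
    if (hdc->plug) {
        hdc->plug(dev);
    }
    dev->realized = 1;
    return 0;
}
```

The price is that every handler's failure logic has to be squeezed into
check(), which is exactly where only the specific device has the knowledge.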

> 
> The reason I liked the interface, is because it removes the need for
> each bus to add its own plug/unplug handling.
       ^^^
Did you mean device instead of bus?
In the PCI code it's a PCIDevice that calls the hotplug handler; the PCI bus
serves only as a source for the plug/unplug handlers and hotplug_qdev.

This series moves hotplug_qdev to BusState so it can be reused by other buses,
and removes from the bus the plug/unplug callbacks that don't belong to it.

It's still an improvement over the current PCI hotplug code, and it allows
simplifying the other buses as well by removing the callbacks from them.

The only way I see to call the callbacks from DEVICE.realize()/unplug() is to
make them all nofail; then it would be safe to call them in the "realize"
property setter. But we have at least two callbacks that can fail:
 pcie_cap_slot_hotplug() and shpc_device_hotplug()
maybe more. I'm not enough of an expert on either of them to tell for sure
whether they could be made nofail. We can try to do it, but that could
perfectly well go on top of this series and be done later.
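
To make the current approach concrete, here is a standalone sketch
(simplified, hypothetical types — not the real QEMU structures) of the
pattern this series adds: the bus carries a hotplug_handler link, and the
device-specific init calls the possibly-failing plug callback, undoing its
own setup on error, as in the pci_qdev_init() hunk quoted above:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified, self-contained model (not the actual QEMU types) of the
 * pattern in this series: BusState carries a hotplug_handler link, and
 * the device-specific init calls the plug callback, undoing its own
 * registration if the callback fails. */

typedef struct Error { const char *msg; } Error;
typedef struct DeviceState DeviceState;

typedef struct HotplugHandler {
    /* plug may fail, e.g. the way pcie_cap_slot_hotplug() can */
    void (*plug)(struct HotplugHandler *h, DeviceState *dev, Error **errp);
} HotplugHandler;

typedef struct BusState {
    HotplugHandler *hotplug_handler;  /* the link this series adds */
} BusState;

struct DeviceState {
    BusState *parent_bus;
    int registered;
};

static void device_unregister(DeviceState *dev)
{
    dev->registered = 0;
}

/* Analogue of the pci_qdev_init() hunk quoted above: only the specific
 * device knows how to abort gracefully when the handler fails. */
static int device_qdev_init(DeviceState *dev, Error **errp)
{
    HotplugHandler *h = dev->parent_bus->hotplug_handler;
    Error *local_err = NULL;

    dev->registered = 1;             /* device-specific setup */
    if (h && h->plug) {
        h->plug(h, dev, &local_err);
    }
    if (local_err) {
        device_unregister(dev);      /* graceful abort: undo setup */
        *errp = local_err;
        return -1;
    }
    return 0;
}
```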

The goal of this series was to add and demonstrate a reusable hotplug
interface, as opposed to the PCI-specific or SCSI-bus-specific ones, so that
it can be used for memory hotplug as well. It might not do everything we
would like, but it looks like a move in the right direction.
If it's the wrong direction, I can drop the idea and fall back to the
original, less intrusive approach, taking into account your comment about
moving the type definitions into a separate header.

> 
> Paolo
> 
> >>> qdev_unplug() might work for now, but I haven't checked all devices that
> >>> might use interface and if it would break anything. Ideally it should be
> >>> in device's unrealize() complementing realize() part.
> >>>
> >>> I'd wait till all buses converted to new interface before attempting to
> >>> generalize current plug/unplug call pathes though.
> >>
> >> I agree that adding a default behavior for no link probably requires
> >> conversion of all buses.  However, looking for the link in the generic
> >> code can be done now.
> >>
> >> Paolo
> >>
> > 
> 

