On Thu, 15 Jan 2004, Oliver Neukum wrote:

> > With the new changes, another pattern might work better. In
> > particular, there's no need for the private data structure to include
> > an in-use semaphore since usbdev->serialize (read-locked) would do
> > just as well.
>
> That is an obvious layering violation. Drivers should not use a core
> semaphore unless they have an unavoidable need (like ifnum conversion).
That's not clear to me. Preventing disconnect() from running _is_ an
unavoidable need. And I don't see any significant difference between
using usbdev->serialize to delay disconnect() from being called and
using a driver-internal semaphore to delay disconnect() from returning.
Both actions will block the core, and at approximately the same place.

> Holding such a lock should be as brief and infrequent as possible.

I agree with that. It's part of the reason for suggesting this new
design pattern.

> > But in fact, the whole idea of using a semaphore to block
> > disconnect() may be unnecessary.
> >
> > There has been a requirement that after a driver's disconnect()
> > routine returns, the driver will not try to access the device at all
> > -- all URBs must be unlinked and the struct usb_device may be
> > deallocated at any time. The core is now (or soon will be)
> > sufficiently well protected that this requirement is no longer
> > needed. When disconnect() is called, usbdev->state will already have
> > been set to USB_STATE_NOTATTACHED, so no new URBs will be accepted,
> > and all URBs in flight will have been unlinked.
>
> That is wrong. Disconnect() need not be triggered by a physical
> unplugging. You can unlink all URBs in flight, but you cannot reliably
> block new submissions, unless you are willing to do pseudo-RCU.

It's true that we cannot _reliably_ block new submissions. But we can
block them, by using the new refuse_request mechanism. (It's not
reliable because the driver can unblock them and because we wouldn't
block endpoint 0 in the absence of a physical unplugging.)

Given the proper circumstances, it would be possible for a driver with
insufficient internal synchronization to successfully submit an URB
after disconnect() returned. So the question becomes: should a driver
delay returning from disconnect() until it can be certain that it will
not submit any more URBs, or should it instead check, at each place
where an URB is submitted, that the device hasn't been disconnected? I
guess either approach could work; the choice should be left to the
driver. (There's a rough sketch of the second approach in the P.S.
below.) Either way, we do still have the requirement of not submitting
URBs after disconnect() returns.

> > All the driver really needs to do is make sure that the struct
> > usb_device isn't deallocated until it has finished cleaning up, and
> > that can be handled with usb_get_dev()/usb_put_dev().
> >
> > This approach does have the disadvantage of complicating the driver's
> > exit
>
> It would enormously complicate the softdisconnect in the core.

I don't understand what you're referring to. What's the softdisconnect?

Alan Stern
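
P.S. To make the check-at-each-submission-point alternative concrete,
here is a rough, untested sketch. All of the my_* names are invented,
and the open/close bookkeeping a real driver would need is left out:

#include <linux/usb.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <asm/semaphore.h>

/* Hypothetical per-device state; every name here is made up. */
struct my_dev {
        struct usb_device       *udev;          /* pinned via usb_get_dev() */
        struct semaphore        sem;            /* driver-internal lock */
        int                     disconnected;   /* set once disconnect() runs */
        struct urb              *urb;
};

/* Each submission point checks the flag under the driver's own lock;
 * no core semaphore is involved. */
static int my_write(struct my_dev *dev)
{
        int rc = -ENODEV;

        down(&dev->sem);
        if (!dev->disconnected)
                rc = usb_submit_urb(dev->urb, GFP_KERNEL);
        up(&dev->sem);
        return rc;
}

static void my_disconnect(struct usb_interface *intf)
{
        struct my_dev *dev = usb_get_intfdata(intf);

        usb_set_intfdata(intf, NULL);

        /* A my_write() racing with us either gets its submission in
         * first or sees the flag and gives up; either way disconnect()
         * doesn't have to wait for all I/O paths to drain. */
        down(&dev->sem);
        dev->disconnected = 1;
        up(&dev->sem);

        /* Take out anything still in flight. */
        usb_unlink_urb(dev->urb);
}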
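
The reference-counting half of the idea would then look something like
this: probe() pins the struct usb_device, and whatever routine does the
driver's final cleanup drops it, possibly long after disconnect() has
returned:

static int my_probe(struct usb_interface *intf,
                const struct usb_device_id *id)
{
        struct my_dev *dev;

        dev = kmalloc(sizeof *dev, GFP_KERNEL);
        if (!dev)
                return -ENOMEM;
        memset(dev, 0, sizeof *dev);

        /* Guarantee the struct usb_device stays allocated until
         * my_cleanup() runs, no matter when disconnect() returns. */
        dev->udev = usb_get_dev(interface_to_usbdev(intf));
        init_MUTEX(&dev->sem);
        usb_set_intfdata(intf, dev);
        return 0;
}

static void my_cleanup(struct my_dev *dev)
{
        usb_put_dev(dev->udev);
        kfree(dev);
}

Nothing in either sketch touches usbdev->serialize; everything that
disconnect() needs to coordinate with goes through the driver's own
semaphore and the device reference count.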