This would be rare with HCDs that maintain chains of DMA
transfers, except when the HC dies in the middle of an I/O
request; so there's no rush to merge this. It'd happen with a
PIO-based HCD though ... :)
- Dave
Async unlink of an URB from an endpoint's I/O queue _normally_ involves a
delay from handshaking with the host controller, to be sure the DMA queue
is inactive. So urb->complete() runs after usb_unlink_urb() returns, and
from a different context. But not always...
The completion may run immediately whenever the HCD knows the HC isn't
busy with the URB. Maybe the HCD is in a HALT state, or the endpoint
queue is temporarily off-schedule (halted, or dead after PM resume
from D3cold, etc) ... or maybe the HCD doesn't use DMA, so most unlinks
just list_del_init() and return.
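For illustration only (not part of the patch): a minimal sketch of the hazard,
using made-up names (my_request, my_cancel, my_complete) and a simplified
completion signature. It roughly mirrors the pre-patch structure, where the
canceler holds the lock across usb_unlink_urb() and the completion handler
wants the same lock:

#include <linux/usb.h>
#include <linux/spinlock.h>

/* hypothetical request bundling several urbs behind one lock */
struct my_request {
	spinlock_t	lock;
	int		entries;
	struct urb	**urbs;
	int		status;
};

/* stands in for urb->complete(); exact signature depends on kernel version */
static void my_complete(struct urb *urb)
{
	struct my_request *req = urb->context;

	spin_lock(&req->lock);		/* deadlocks if the canceler still holds it */
	req->status = urb->status;
	/* ... unlink sibling urbs, wake up waiters ... */
	spin_unlock(&req->lock);
}

static void my_cancel(struct my_request *req)
{
	unsigned long	flags;
	int		i;

	spin_lock_irqsave(&req->lock, flags);
	req->status = -ECONNRESET;
	for (i = 0; i < req->entries; i++)
		usb_unlink_urb(req->urbs[i]);	/* may invoke my_complete() right here */
	spin_unlock_irqrestore(&req->lock, flags);
}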
This patch makes usb_sg_cancel() and sg_complete() drop io->lock while they
cancel active URBs, preventing a potential self-deadlock when the completion
handler runs immediately. (A condensed sketch of the resulting locking
pattern follows the patch.)
Signed-off-by: David Brownell <[EMAIL PROTECTED]>
--- 1.70/drivers/usb/core/message.c Thu Nov 4 18:38:10 2004
+++ edited/drivers/usb/core/message.c Mon Nov 22 09:04:54 2004
@@ -243,14 +243,16 @@
// BUG ();
}
- if (urb->status && urb->status != -ECONNRESET) {
+ if (io->status == 0 && urb->status && urb->status != -ECONNRESET) {
int i, found, status;
io->status = urb->status;
/* the previous urbs, and this one, completed already.
* unlink pending urbs so they won't rx/tx bad data.
+ * careful: unlink can sometimes be synchronous...
*/
+ spin_unlock (&io->lock);
for (i = 0, found = 0; i < io->entries; i++) {
if (!io->urbs [i] || !io->urbs [i]->dev)
continue;
@@ -263,6 +265,7 @@
} else if (urb == io->urbs [i])
found = 1;
}
+ spin_lock (&io->lock);
}
urb->dev = NULL;
@@ -524,6 +527,7 @@
int i;
io->status = -ECONNRESET;
+ spin_unlock (&io->lock);
for (i = 0; i < io->entries; i++) {
int retval;
@@ -534,6 +538,7 @@
dev_warn (&io->dev->dev, "%s, unlink --> %d\n",
__FUNCTION__, retval);
}
+ spin_lock (&io->lock);
}
spin_unlock_irqrestore (&io->lock, flags);
}
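For completeness, a condensed sketch of the post-patch locking in the cancel
path, reusing the hypothetical my_request/my_cancel names from above and
omitting the usb_unlink_urb() error reporting: drop the lock around the
unlink loop and retake it afterwards, so a completion that runs synchronously
can acquire it without deadlocking.

static void my_cancel(struct my_request *req)
{
	unsigned long	flags;

	spin_lock_irqsave(&req->lock, flags);
	if (!req->status) {
		int	i;

		req->status = -ECONNRESET;	/* later completions bail out early */
		spin_unlock(&req->lock);	/* irqs stay disabled */
		for (i = 0; i < req->entries; i++)
			usb_unlink_urb(req->urbs[i]);	/* may complete synchronously */
		spin_lock(&req->lock);
	}
	spin_unlock_irqrestore(&req->lock, flags);
}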