Here's another UHCI patch, that's a bit less involved than
what Dan posted yesterday ... it only tries to make the
interrupt code handle queueing. (Except for doc updates
and a bit of gratuitous code shrinkage.) The goal being
to get to the point, quickly, where we can get rid of
the interrupt automagic.
I ordinarily wouldn't post it because it's got at least two
problems that I know about (interrupt logic only; everything
else is unaffected), and has debug code, but since Dan posted
his, now would seem to be the time.
Here it is, with some compare/contrast to Dan's patch:
- Updates some of the internal doc.
- Removes the unused skel_int_NN #defines.
- "int_qh" replaces "skeltd". Dan also merged
these into "skelqh", which I avoided since it
was more invasive.
This meant changing the init and cleanup logic
for these.
- removes the functions that were specific to the
previous interrupt-only implementation.
- uhci_alloc_common() allocates non-iso td queues
for DATA stages, allocates the urb's QH. This
was previously the core of the bulk submit logic,
so interrupt now gets maxerr=3.
I imagine this could also be used for control
requests, without ending up nasty (caller would
provide setup and status stages); didn't notice
how Dan did this.
Noted an issue with ENOMEM exits trashing toggle.
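That toggle issue can be shown with a tiny standalone sketch (hypothetical names, not the driver's code): if TD allocation fails partway through building the data stage, some toggles have already been flipped, so the endpoint's saved toggle no longer matches what the device expects. One fix is to snapshot the toggle before building the queue and restore it on the error path:

```c
/* Illustrative sketch of the ENOMEM/toggle problem noted above.
 * All names here are made up for the example; in the real driver
 * the flip is usb_dotoggle() and the failure is uhci_alloc_td()
 * returning NULL.
 */
struct ep_state { int toggle; };

static int fake_alloc_td(int n, int fail_at)
{
	return n != fail_at;		/* 0 == allocation failure */
}

static int build_queue(struct ep_state *ep, int ntds, int fail_at)
{
	int saved = ep->toggle;		/* snapshot before any flips */
	int n;

	for (n = 0; n < ntds; n++) {
		if (!fake_alloc_td(n, fail_at)) {
			ep->toggle = saved;	/* undo partial toggling */
			return -1;		/* -ENOMEM in the driver */
		}
		ep->toggle ^= 1;		/* usb_dotoggle() equivalent */
	}
	return 0;
}
```

Without the restore, the endpoint state would be left mid-sequence after a failed submit, and the next successful transfer would start on the wrong toggle.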
- uhci_submit_common() replaces submit_{bulk,interrupt}()
since they were almost identical at that point.
This might not be ideal if interrupt scheduling ever
moves to a tree model to maximize bandwidth usage;
but it doesn't do that now, and this is simpler.
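For context, the skeleton a caller passes in for interrupt transfers comes from __interval_to_skel(); the binary slotting it implements can be sketched standalone like this (an approximation of the idea, not the patch's exact code):

```c
/* Map a requested interrupt period (in frames) to one of the eight
 * binary skeleton slots: effectively floor(log2(interval)) clamped
 * to [0, 7], i.e. int1 == slot 0 ... int128 == slot 7.  So a 3 msec
 * interval lands in the 2 msec slot, and anything over 128 msec
 * lands in the 128 msec slot, as the doc comment describes.
 */
static int interval_to_skel(int interval)
{
	int i;

	if (interval < 1)
		interval = 1;
	for (i = 0; i < 7; i++)
		if (interval < (1 << (i + 1)))
			break;
	return i;
}
```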
- Corresponding cleanups of the submit paths that
handle bulk or interrupt urbs. And a minor fix
of the iso submit path (bandwidth is always zero).
- Renamed usb_result_{bulk,interrupt}() -- same
function, with a #define -- as usb_result_common().
- Whacked the finish_urb path to resubmit interrupt
urbs (this will get modified with automagic gone).
- Streamlined the uhci_start() logic a bit; the links
can be set up in the loops.
Hey, that code at the end of uhci_start() is all but
a copy of what release_uhci() does!! So call that instead.
- Needed to modify the code that dumps schedules.
(Which IMO is extremely healthy to keep around.)
The two bugs may be related, but I'd barely got past
basic "did I break non-interrupt code" testing before
I found out about Dan's patch.
- The interrupt urbs don't appear in the schedule dump.
But they do appear to work well enough to enumerate
one device through a hub.
- Didn't enumerate a second device on that hub ... :)
So without further ado, here's what I got working (that
far) yesterday morning. (The RFC is of course "what
do we do next".)
- Dave
--- ./drivers-dist/usb/host/uhci-hcd.h Sun Sep 15 19:57:44 2002
+++ ./drivers/usb/host/uhci-hcd.h Fri Oct 11 10:50:27 2002
@@ -196,20 +196,22 @@
/*
* There are various standard queues. We set up several different
- * queues for each of the three basic queue types: interrupt,
+ * queue skeletons for each of three transfer types: interrupt,
* control, and bulk.
*
- * - There are various different interrupt latencies: ranging from
- * every other USB frame (2 ms apart) to every 256 USB frames (ie
- * 256 ms apart). Make your choice according to how obnoxious you
- * want to be on the wire, vs how critical latency is for you.
- * - The control list is done every frame.
- * - There are 4 bulk lists, so that up to four devices can have a
- * bulk list of their own and when run concurrently all four lists
- * will be be serviced.
+ * - There are various different interrupt periods, ranging from
+ * every USB frame (1 ms apart) to every 128 USB frames (128ms apart).
+ * Devices choose their intervals according to how obnoxious they
+ * want to be on the wire, vs how critical latency is.
+ * - Control requests can be done every frame; we run low speed at
+ * one packet per frame to help prevent starvation
+ * - Bulk requests can be done every frame too, if there's enough
+ * bandwidth remaining after everything else.
*
* This is a bit misleading, there are various interrupt latencies, but they
- * vary a bit, interrupt2 isn't exactly 2ms, it can vary up to 4ms since the
- * other queues can "override" it. interrupt4 can vary up to 8ms, etc. Minor
- * problem
+ * get put into binary slots: 3 msec intervals use the 2msec slot, and
+ * anything over 128 msec goes into the 128msec slot. Plus, getting actual
+ * 1msec interrupt latencies requires drivers to queue interrupt transfers
+ * (double buffering), since we can't usually process completions from one
+ * frame before the HC starts interrupt transfers in the next one.
*
* In the case of the root hub, these QH's are just head's of qh's. Don't
@@ -253,15 +255,6 @@
*/
-#define UHCI_NUM_SKELTD 10
-#define skel_int1_td skeltd[0]
-#define skel_int2_td skeltd[1]
-#define skel_int4_td skeltd[2]
-#define skel_int8_td skeltd[3]
-#define skel_int16_td skeltd[4]
-#define skel_int32_td skeltd[5]
-#define skel_int64_td skeltd[6]
-#define skel_int128_td skeltd[7]
-#define skel_int256_td skeltd[8]
-#define skel_term_td skeltd[9] /* To work around PIIX UHCI bug */
+#define UHCI_NUM_INTQH 8
+#define skel_int1_qh int_qh[0] /* in every frame! */
#define UHCI_NUM_SKELQH 4
@@ -280,10 +273,6 @@
*
* For a given <interval>, this function returns the appropriate/matching
- * skelqh[] index value.
- *
- * NOTE: For UHCI, we don't really need int256_qh since the maximum interval
- * is 255 ms. However, we do need an int1_qh since 1 is a valid interval
- * and we should meet that frequency when requested to do so.
- * This will require some change(s) to the UHCI skeleton.
+ * skelqh[] index value. For full and low speed interrupts, we know this
+ * will never be more than floor(log2(255)) == 7.
*/
static inline int __interval_to_skel(int interval)
@@ -333,5 +322,10 @@
struct usb_bus *bus;
- struct uhci_td *skeltd[UHCI_NUM_SKELTD]; /* Skeleton TD's */
+ /* skel_term_td is used to work around a "will not fix"
+ * PIIX UHCI erratum (publicly documented).
+ */
+ struct uhci_td *skel_term_td;
+
+ struct uhci_qh *int_qh[UHCI_NUM_INTQH]; /* INT Skeleton QHs */
struct uhci_qh *skelqh[UHCI_NUM_SKELQH]; /* Skeleton QH's */
--- ./drivers-dist/usb/host/uhci-hcd.c Mon Oct 7 18:19:12 2002
+++ ./drivers/usb/host/uhci-hcd.c Fri Oct 11 13:37:50 2002
@@ -175,22 +175,4 @@
}
-static void uhci_insert_td(struct uhci_hcd *uhci, struct uhci_td *skeltd, struct uhci_td *td)
-{
- unsigned long flags;
- struct uhci_td *ltd;
-
- spin_lock_irqsave(&uhci->frame_list_lock, flags);
-
- ltd = list_entry(skeltd->fl_list.prev, struct uhci_td, fl_list);
-
- td->link = ltd->link;
- mb();
- ltd->link = cpu_to_le32(td->dma_handle);
-
- list_add_tail(&td->fl_list, &skeltd->fl_list);
-
- spin_unlock_irqrestore(&uhci->frame_list_lock, flags);
-}
-
/*
* We insert Isochronous transfers directly into the frame list at the
@@ -372,4 +354,6 @@
struct uhci_qh *lqh;
+dbg ("append qh %p using skel %p", urbp->qh, skelqh);
+
/* Grab the last QH */
lqh = list_entry(skelqh->list.prev, struct uhci_qh, list);
@@ -384,4 +368,5 @@
turbp->qh->link = lqh->link;
+dbg ("patch prev qh %p", turbp->qh);
}
urbp->qh->link = lqh->link;
@@ -409,4 +394,5 @@
turbp->qh->link = cpu_to_le32(urbp->qh->dma_handle) |
UHCI_PTR_QH;
+dbg ("patch queued qh %p", turbp->qh);
}
}
@@ -503,5 +489,5 @@
/* This function will append one URB's QH to another URB's QH. This is for */
-/* queuing bulk transfers and soon implicitily for */
+/* queuing bulk or interrupt transfers, and soon implicitly for */
/* control transfers */
static void uhci_append_queued_urb(struct uhci_hcd *uhci, struct urb *eurb, struct urb *urb)
@@ -785,4 +771,106 @@
/*
+ * build the td queue used for the data stage of a non-iso request,
+ * and allocate its qh.
+ */
+static struct uhci_qh *
+uhci_alloc_common (struct uhci_hcd *uhci, struct urb *urb)
+{
+ struct uhci_td *td;
+ struct uhci_qh *qh;
+ unsigned long destination, status;
+ struct urb_priv *urbp;
+ int maxsze = usb_maxpacket(urb->dev, urb->pipe,
+usb_pipeout(urb->pipe));
+ int len = urb->transfer_buffer_length;
+ dma_addr_t data = urb->transfer_dma;
+
+ /* The "pipe" thing contains the destination in bits 8--18;
+ * usb.h assigned those bits knowing UHCI might rely on them.
+ */
+ destination = (urb->pipe & PIPE_DEVEP_MASK) | usb_packetid(urb->pipe);
+
+ /* 3 errors */
+ status = TD_CTRL_ACTIVE | uhci_maxerr(3);
+ if (!(urb->transfer_flags & URB_SHORT_NOT_OK))
+ status |= TD_CTRL_SPD;
+
+ /* control and interrupt transfers can be low speed */
+ if (urb->dev->speed == USB_SPEED_LOW)
+ status |= TD_CTRL_LS;
+
+ /* FIXME make this handle control requests too. caller will have
+ * queued SETUP already, but initial toggle for control is always
+ * data1 and the status ack packet needs special casing.
+ */
+
+ /* build the DATA TD's */
+ do { /* Allow zero length packets */
+ int pktsze = len;
+
+ if (pktsze > maxsze)
+ pktsze = maxsze;
+
+ td = uhci_alloc_td(uhci, urb->dev);
+ if (!td)
+ return 0;
+
+ uhci_add_td_to_urb(urb, td);
+ uhci_fill_td(td, status, destination | uhci_explen(pktsze - 1) |
+ (usb_gettoggle(urb->dev, usb_pipeendpoint(urb->pipe),
+ usb_pipeout(urb->pipe)) << TD_TOKEN_TOGGLE_SHIFT),
+ data);
+
+ data += pktsze;
+ len -= maxsze;
+
+ /* FIXME if we return after ENOMEM, the saved data
+ * toggle for bulk/interrupt EPs can be wrong.
+ */
+ usb_dotoggle(urb->dev, usb_pipeendpoint(urb->pipe),
+ usb_pipeout(urb->pipe));
+ } while (len > 0);
+
+ /*
+ * USB_ZERO_PACKET means adding a 0-length packet, if
+ * direction is OUT and the transfer_length was an
+ * exact multiple of maxsze, hence
+ * (len = transfer_length - N * maxsze) == 0
+ * however, if transfer_length == 0, the zero packet
+ * was already prepared above.
+ */
+ if (usb_pipeout(urb->pipe) && (urb->transfer_flags & USB_ZERO_PACKET) &&
+ !len && urb->transfer_buffer_length) {
+ td = uhci_alloc_td(uhci, urb->dev);
+ if (!td)
+ return 0;
+
+ uhci_add_td_to_urb(urb, td);
+ uhci_fill_td(td, status, destination |
+uhci_explen(UHCI_NULL_DATA_SIZE) |
+ (usb_gettoggle(urb->dev, usb_pipeendpoint(urb->pipe),
+ usb_pipeout(urb->pipe)) << TD_TOKEN_TOGGLE_SHIFT),
+ data);
+
+ usb_dotoggle(urb->dev, usb_pipeendpoint(urb->pipe),
+ usb_pipeout(urb->pipe));
+ }
+
+ /* Request an IRQ only for the very last packet. we ignore the
+ * URB_NO_INTERRUPT hint since for UHCI, recycling td memory is
+ * usually more important than eliminating IRQs.
+ */
+ td->status |= cpu_to_le32(TD_CTRL_IOC);
+
+ if ((qh = uhci_alloc_qh(uhci, urb->dev)) == 0)
+ return 0;
+
+ urbp = (struct urb_priv *)urb->hcpriv;
+ urbp->qh = qh;
+ qh->urbp = urbp;
+
+ return qh;
+}
+
+
+/*
* Control transfers
*/
@@ -1047,40 +1135,39 @@
}
+
/*
- * Interrupt transfers
- */
-static int uhci_submit_interrupt(struct uhci_hcd *uhci, struct urb *urb)
+ * Bulk and interrupt transfers are identical except that:
+ * - only bulk may use more than one packet per frame (breadth first);
+ * - interrupt transfers can run out of reserved periodic bandwidth;
+ * - low speed bulk is illegal.
+ * The caller checked the last two, we handle the first.
+ */
+static int
+uhci_submit_common (
+ struct uhci_hcd *uhci,
+ struct urb *urb,
+ struct urb *eurb,
+ struct uhci_qh *skel
+)
{
- struct uhci_td *td;
- unsigned long destination, status;
-
- if (urb->transfer_buffer_length > usb_maxpacket(urb->dev, urb->pipe, usb_pipeout(urb->pipe)))
- return -EINVAL;
-
- /* The "pipe" thing contains the destination in bits 8--18 */
- destination = (urb->pipe & PIPE_DEVEP_MASK) | usb_packetid(urb->pipe);
-
- status = TD_CTRL_ACTIVE | TD_CTRL_IOC;
- if (urb->dev->speed == USB_SPEED_LOW)
- status |= TD_CTRL_LS;
+ struct uhci_qh *qh;
- td = uhci_alloc_td(uhci, urb->dev);
- if (!td)
+ if (!(qh = uhci_alloc_common (uhci, urb)))
return -ENOMEM;
- destination |= (usb_gettoggle(urb->dev, usb_pipeendpoint(urb->pipe), usb_pipeout(urb->pipe)) << TD_TOKEN_TOGGLE_SHIFT);
- destination |= uhci_explen(urb->transfer_buffer_length - 1);
-
- usb_dotoggle(urb->dev, usb_pipeendpoint(urb->pipe), usb_pipeout(urb->pipe));
-
- uhci_add_td_to_urb(urb, td);
- uhci_fill_td(td, status, destination, urb->transfer_dma);
+ uhci_insert_tds_in_qh(qh, urb,
+ usb_pipeint (urb->pipe)
+ ? UHCI_PTR_DEPTH
+ : UHCI_PTR_BREADTH);
- uhci_insert_td(uhci, uhci->skeltd[__interval_to_skel(urb->interval)], td);
+ if (eurb)
+ uhci_append_queued_urb(uhci, eurb, urb);
+ else
+ uhci_insert_qh(uhci, skel, urb);
return -EINPROGRESS;
}
-static int uhci_result_interrupt(struct uhci_hcd *uhci, struct urb *urb)
+static int uhci_result_common (struct uhci_hcd *uhci, struct urb *urb)
{
struct list_head *tmp, *head;
@@ -1129,5 +1216,5 @@
if ((debug == 1 && ret != -EPIPE) || debug > 1) {
/* Some debugging code */
- dbg("uhci_result_interrupt/bulk() failed with status %x",
+ dbg("uhci_result_common() failed with status %x",
status);
@@ -1146,128 +1233,4 @@
}
-static void uhci_reset_interrupt(struct uhci_hcd *uhci, struct urb *urb)
-{
- struct urb_priv *urbp = (struct urb_priv *)urb->hcpriv;
- struct uhci_td *td;
- unsigned long flags;
-
- spin_lock_irqsave(&urb->lock, flags);
-
- td = list_entry(urbp->td_list.next, struct uhci_td, list);
-
- td->status = (td->status & cpu_to_le32(0x2F000000)) | cpu_to_le32(TD_CTRL_ACTIVE | TD_CTRL_IOC);
- td->token &= ~cpu_to_le32(TD_TOKEN_TOGGLE);
- td->token |= cpu_to_le32(usb_gettoggle(urb->dev, usb_pipeendpoint(urb->pipe), usb_pipeout(urb->pipe)) << TD_TOKEN_TOGGLE_SHIFT);
- usb_dotoggle(urb->dev, usb_pipeendpoint(urb->pipe), usb_pipeout(urb->pipe));
-
- urb->status = -EINPROGRESS;
-
- spin_unlock_irqrestore(&urb->lock, flags);
-}
-
-/*
- * Bulk transfers
- */
-static int uhci_submit_bulk(struct uhci_hcd *uhci, struct urb *urb, struct urb *eurb)
-{
- struct uhci_td *td;
- struct uhci_qh *qh;
- unsigned long destination, status;
- int maxsze = usb_maxpacket(urb->dev, urb->pipe, usb_pipeout(urb->pipe));
- int len = urb->transfer_buffer_length;
- struct urb_priv *urbp = (struct urb_priv *)urb->hcpriv;
- dma_addr_t data = urb->transfer_dma;
-
- if (len < 0)
- return -EINVAL;
-
- /* Can't have low speed bulk transfers */
- if (urb->dev->speed == USB_SPEED_LOW)
- return -EINVAL;
-
- /* The "pipe" thing contains the destination in bits 8--18 */
- destination = (urb->pipe & PIPE_DEVEP_MASK) | usb_packetid(urb->pipe);
-
- /* 3 errors */
- status = TD_CTRL_ACTIVE | uhci_maxerr(3);
- if (!(urb->transfer_flags & URB_SHORT_NOT_OK))
- status |= TD_CTRL_SPD;
-
- /*
- * Build the DATA TD's
- */
- do { /* Allow zero length packets */
- int pktsze = len;
-
- if (pktsze > maxsze)
- pktsze = maxsze;
-
- td = uhci_alloc_td(uhci, urb->dev);
- if (!td)
- return -ENOMEM;
-
- uhci_add_td_to_urb(urb, td);
- uhci_fill_td(td, status, destination | uhci_explen(pktsze - 1) |
- (usb_gettoggle(urb->dev, usb_pipeendpoint(urb->pipe),
- usb_pipeout(urb->pipe)) << TD_TOKEN_TOGGLE_SHIFT),
- data);
-
- data += pktsze;
- len -= maxsze;
-
- usb_dotoggle(urb->dev, usb_pipeendpoint(urb->pipe),
- usb_pipeout(urb->pipe));
- } while (len > 0);
-
- /*
- * USB_ZERO_PACKET means adding a 0-length packet, if
- * direction is OUT and the transfer_length was an
- * exact multiple of maxsze, hence
- * (len = transfer_length - N * maxsze) == 0
- * however, if transfer_length == 0, the zero packet
- * was already prepared above.
- */
- if (usb_pipeout(urb->pipe) && (urb->transfer_flags & USB_ZERO_PACKET) &&
- !len && urb->transfer_buffer_length) {
- td = uhci_alloc_td(uhci, urb->dev);
- if (!td)
- return -ENOMEM;
-
- uhci_add_td_to_urb(urb, td);
- uhci_fill_td(td, status, destination | uhci_explen(UHCI_NULL_DATA_SIZE) |
- (usb_gettoggle(urb->dev, usb_pipeendpoint(urb->pipe),
- usb_pipeout(urb->pipe)) << TD_TOKEN_TOGGLE_SHIFT),
- data);
-
- usb_dotoggle(urb->dev, usb_pipeendpoint(urb->pipe),
- usb_pipeout(urb->pipe));
- }
-
- /* Set the flag on the last packet */
- td->status |= cpu_to_le32(TD_CTRL_IOC);
-
- qh = uhci_alloc_qh(uhci, urb->dev);
- if (!qh)
- return -ENOMEM;
-
- urbp->qh = qh;
- qh->urbp = urbp;
-
- /* Always breadth first */
- uhci_insert_tds_in_qh(qh, urb, UHCI_PTR_BREADTH);
-
- if (eurb)
- uhci_append_queued_urb(uhci, eurb, urb);
- else
- uhci_insert_qh(uhci, uhci->skel_bulk_qh, urb);
-
- uhci_inc_fsbr(uhci, urb);
-
- return -EINPROGRESS;
-}
-
-/* We can use the result interrupt since they're identical */
-#define uhci_result_bulk uhci_result_interrupt
-
/*
* Isochronous transfers
@@ -1447,4 +1410,7 @@
int bustime;
+ if (urb->transfer_buffer_length < 0)
+ return -EINVAL;
+
spin_lock_irqsave(&uhci->urb_list_lock, flags);
@@ -1464,34 +1430,36 @@
break;
case PIPE_INTERRUPT:
- if (eurb)
- ret = -ENXIO; /* no interrupt queueing yet */
- else if (urb->bandwidth == 0) { /* not yet checked/allocated */
- bustime = usb_check_bandwidth(urb->dev, urb);
- if (bustime < 0)
- ret = bustime;
- else {
- ret = uhci_submit_interrupt(uhci, urb);
- if (ret == -EINPROGRESS)
- usb_claim_bandwidth(urb->dev, urb, bustime, 0);
- }
- } else /* bandwidth is already set */
- ret = uhci_submit_interrupt(uhci, urb);
+ bustime = usb_check_bandwidth(urb->dev, urb);
+ if (bustime < 0)
+ ret = bustime;
+ else {
+ int index = __interval_to_skel(urb->interval);
+
+ ret = uhci_submit_common(uhci, urb, eurb,
+ uhci->int_qh [index]);
+ if (ret == -EINPROGRESS)
+ usb_claim_bandwidth(urb->dev, urb, bustime, 0);
+ }
+dbg ("submit intr --> %d, eurb %p", ret, eurb);
break;
case PIPE_BULK:
- ret = uhci_submit_bulk(uhci, urb, eurb);
+ /* Can't have low speed bulk transfers */
+ if (urb->dev->speed == USB_SPEED_LOW)
+ return -EINVAL;
+
+ ret = uhci_submit_common(uhci, urb, eurb, uhci->skel_bulk_qh);
+
+ if (ret == -EINPROGRESS)
+ uhci_inc_fsbr(uhci, urb);
break;
case PIPE_ISOCHRONOUS:
- if (urb->bandwidth == 0) { /* not yet checked/allocated */
- bustime = usb_check_bandwidth(urb->dev, urb);
- if (bustime < 0) {
- ret = bustime;
- break;
- }
-
+ bustime = usb_check_bandwidth(urb->dev, urb);
+ if (bustime < 0)
+ ret = bustime;
+ else {
ret = uhci_submit_isochronous(uhci, urb);
if (ret == -EINPROGRESS)
usb_claim_bandwidth(urb->dev, urb, bustime, 1);
- } else /* bandwidth is already set */
- ret = uhci_submit_isochronous(uhci, urb);
+ }
break;
}
@@ -1530,9 +1498,8 @@
ret = uhci_result_control(uhci, urb);
break;
- case PIPE_INTERRUPT:
- ret = uhci_result_interrupt(uhci, urb);
- break;
- case PIPE_BULK:
- ret = uhci_result_bulk(uhci, urb);
+ // case PIPE_INTERRUPT:
+ // case PIPE_BULK:
+ default:
+ ret = uhci_result_common (uhci, urb);
break;
case PIPE_ISOCHRONOUS:
@@ -1546,34 +1513,13 @@
goto out;
- switch (usb_pipetype(urb->pipe)) {
- case PIPE_CONTROL:
- case PIPE_BULK:
- case PIPE_ISOCHRONOUS:
- /* Release bandwidth for Interrupt or Isoc. transfers */
- /* Spinlock needed ? */
- if (urb->bandwidth)
- usb_release_bandwidth(urb->dev, urb, 1);
- uhci_unlink_generic(uhci, urb);
- break;
- case PIPE_INTERRUPT:
- /* Interrupts are an exception */
- if (urb->interval)
- goto out_complete;
-
- /* Release bandwidth for Interrupt or Isoc. transfers */
- /* Spinlock needed ? */
- if (urb->bandwidth)
- usb_release_bandwidth(urb->dev, urb, 0);
- uhci_unlink_generic(uhci, urb);
- break;
- default:
- info("uhci_transfer_result: unknown pipe type %d for urb %p\n",
- usb_pipetype(urb->pipe), urb);
- }
+ /* Release bandwidth for periodic transfers */
+ /* Spinlock needed ? */
+ if (urb->bandwidth)
+ usb_release_bandwidth(urb->dev, urb, 1);
+ uhci_unlink_generic(uhci, urb);
/* Remove it from uhci->urb_list */
list_del_init(&urbp->urb_list);
-out_complete:
uhci_add_complete(uhci, urb);
@@ -1803,18 +1749,23 @@
static void uhci_finish_urb(struct usb_hcd *hcd, struct urb *urb)
{
- struct urb_priv *urbp = (struct urb_priv *)urb->hcpriv;
- struct usb_device *dev = urb->dev;
- struct uhci_hcd *uhci = hcd_to_uhci(hcd);
- int killed, resubmit_interrupt, status;
- unsigned long flags;
+ struct urb_priv *urbp = (struct urb_priv *)urb->hcpriv;
+ struct usb_device *dev = urb->dev;
+ struct uhci_hcd *uhci = hcd_to_uhci(hcd);
+ int killed, resubmit_interrupt, status;
+ unsigned long flags;
spin_lock_irqsave(&urb->lock, flags);
+ status = urbp->status;
- killed = (urb->status == -ENOENT || urb->status == -ECONNRESET);
- resubmit_interrupt = (usb_pipetype(urb->pipe) == PIPE_INTERRUPT &&
- urb->interval);
+ /* FIXME all this automagic resubmit logic will vanish
+ * later in the 2.5 series
+ */
+ killed = status == -ENOENT || status == -ECONNRESET;
+ resubmit_interrupt = usb_pipetype(urb->pipe) == PIPE_INTERRUPT
+ && !killed;
- status = urbp->status;
- if (!resubmit_interrupt || killed)
+ if (resubmit_interrupt)
+ usb_get_urb (urb);
+ else
/* We don't need urb_priv anymore */
uhci_destroy_urb_priv(uhci, urb);
@@ -1824,8 +1775,5 @@
spin_unlock_irqrestore(&urb->lock, flags);
- if (resubmit_interrupt)
- urb->complete(urb);
- else
- usb_hcd_giveback_urb(hcd, urb);
+ usb_hcd_giveback_urb(hcd, urb);
if (resubmit_interrupt)
@@ -1834,7 +1782,10 @@
killed = (urb->status == -ENOENT || urb->status == -ECONNRESET);
- if (resubmit_interrupt && !killed) {
- urb->dev = dev;
- uhci_reset_interrupt(uhci, urb);
+ if (resubmit_interrupt) {
+ if (!killed) {
+ urb->dev = dev;
+ usb_submit_urb (urb, SLAB_ATOMIC);
+ }
+ usb_get_urb (urb);
}
}
@@ -2042,10 +1993,15 @@
}
- for (i = 0; i < UHCI_NUM_SKELTD; i++)
- if (uhci->skeltd[i]) {
- uhci_free_td(uhci, uhci->skeltd[i]);
- uhci->skeltd[i] = NULL;
+ for (i = 0; i < UHCI_NUM_INTQH; i++)
+ if (uhci->int_qh[i]) {
+ uhci_free_qh(uhci, uhci->int_qh[i]);
+ uhci->int_qh[i] = NULL;
}
+ if (uhci->skel_term_td) {
+ uhci_free_td(uhci, uhci->skel_term_td);
+ uhci->skel_term_td = 0;
+ }
+
if (uhci->qh_pool) {
pci_pool_destroy(uhci->qh_pool);
@@ -2094,5 +2050,5 @@
unsigned io_size;
dma_addr_t dma_handle;
- struct usb_device *udev;
+ struct usb_device *udev = 0;
#ifdef CONFIG_PROC_FS
struct proc_dir_entry *ent;
@@ -2107,5 +2063,5 @@
err("couldn't create uhci proc entry");
retval = -ENOMEM;
- goto err_create_proc_entry;
+ goto cleanup;
}
@@ -2142,5 +2098,5 @@
if (!uhci->fl) {
err("unable to allocate consistent memory for frame list");
- goto err_alloc_fl;
+ goto cleanup;
}
@@ -2153,5 +2109,5 @@
if (!uhci->td_pool) {
err("unable to create td pci_pool");
- goto err_create_td_pool;
+ goto cleanup;
}
@@ -2160,9 +2116,12 @@
if (!uhci->qh_pool) {
err("unable to create qh pci_pool");
- goto err_create_qh_pool;
+ goto cleanup;
}
/* Initialize the root hub */
+#if 1
+ port = 2;
+#else
/* UHCI specs says devices must have 2 ports, but goes on to say */
/* they may have more but give no way to determine how many they */
@@ -2185,4 +2144,5 @@
port = 2;
}
+#endif
uhci->rh_numports = port;
@@ -2191,69 +2151,53 @@
if (!udev) {
err("unable to allocate root hub");
- goto err_alloc_root_hub;
+ goto cleanup;
}
- uhci->skeltd[0] = uhci_alloc_td(uhci, udev);
- if (!uhci->skeltd[0]) {
- err("unable to allocate TD 0");
- goto err_alloc_skeltd;
- }
-
- /*
- * 9 Interrupt queues; link int2 to int1, int4 to int2, etc
- * then link int1 to control and control to bulk
- */
- for (i = 1; i < 9; i++) {
- struct uhci_td *td;
-
- td = uhci->skeltd[i] = uhci_alloc_td(uhci, udev);
- if (!td) {
- err("unable to allocate TD %d", i);
- goto err_alloc_skeltd;
+ /* interrupt skeletons: shortest periods are shared at the end */
+ for (i = 0; i < UHCI_NUM_INTQH; i++) {
+ uhci->int_qh[i] = uhci_alloc_qh(uhci, udev);
+ if (!uhci->int_qh[i]) {
+ err("unable to allocate int_qh %d", i);
+ goto cleanup;
}
-
- uhci_fill_td(td, 0, uhci_explen(UHCI_NULL_DATA_SIZE) |
- (0x7f << TD_TOKEN_DEVADDR_SHIFT) | USB_PID_IN, 0);
- td->link = cpu_to_le32(uhci->skeltd[i - 1]->dma_handle);
- }
-
- uhci->skel_term_td = uhci_alloc_td(uhci, udev);
- if (!uhci->skel_term_td) {
- err("unable to allocate skel TD term");
- goto err_alloc_skeltd;
+ /* link int1 after int2, int2 after int4, etc */
+ if (i != 0)
+ uhci->int_qh[i]->link = UHCI_PTR_QH
+ | cpu_to_le32(uhci->int_qh[i-1]->dma_handle);
}
+ /* non-periodic skeletons: control (lowspeed, fullspeed), bulk */
for (i = 0; i < UHCI_NUM_SKELQH; i++) {
uhci->skelqh[i] = uhci_alloc_qh(uhci, udev);
if (!uhci->skelqh[i]) {
err("unable to allocate QH %d", i);
- goto err_alloc_skelqh;
+ goto cleanup;
}
+ /* link ls control after int1, bulk after control, ... */
+ if (i == 0)
+ uhci->int_qh[0]->link = UHCI_PTR_QH
+ | cpu_to_le32(uhci->skelqh[0]->dma_handle);
+ else
+ uhci->skelqh[i - 1]->link = UHCI_PTR_QH
+ | cpu_to_le32(uhci->skelqh[i]->dma_handle);
}
- uhci_fill_td(uhci->skel_int1_td, 0, (UHCI_NULL_DATA_SIZE << 21) |
- (0x7f << TD_TOKEN_DEVADDR_SHIFT) | USB_PID_IN, 0);
- uhci->skel_int1_td->link = cpu_to_le32(uhci->skel_ls_control_qh->dma_handle) | UHCI_PTR_QH;
-
- uhci->skel_ls_control_qh->link = cpu_to_le32(uhci->skel_hs_control_qh->dma_handle) | UHCI_PTR_QH;
- uhci->skel_ls_control_qh->element = UHCI_PTR_TERM;
-
- uhci->skel_hs_control_qh->link = cpu_to_le32(uhci->skel_bulk_qh->dma_handle) | UHCI_PTR_QH;
- uhci->skel_hs_control_qh->element = UHCI_PTR_TERM;
-
- uhci->skel_bulk_qh->link = cpu_to_le32(uhci->skel_term_qh->dma_handle) | UHCI_PTR_QH;
- uhci->skel_bulk_qh->element = UHCI_PTR_TERM;
-
- /* This dummy TD is to work around a bug in Intel PIIX controllers */
+ /* the terminating skeleton has a single always-disabled TD,
+ * as Intel's workaround for an FSBR bug in PIIX controllers.
+ */
+ uhci->skel_term_td = uhci_alloc_td(uhci, udev);
+ if (!uhci->skel_term_td) {
+ err("unable to allocate skel TD term");
+ goto cleanup;
+ }
uhci_fill_td(uhci->skel_term_td, 0, (UHCI_NULL_DATA_SIZE << 21) |
(0x7f << TD_TOKEN_DEVADDR_SHIFT) | USB_PID_IN, 0);
uhci->skel_term_td->link = cpu_to_le32(uhci->skel_term_td->dma_handle);
- uhci->skel_term_qh->link = UHCI_PTR_TERM;
uhci->skel_term_qh->element = cpu_to_le32(uhci->skel_term_td->dma_handle);
/*
- * Fill the frame list: make all entries point to
- * the proper interrupt queue.
+ * Fill the frame list: initialize all entries so they point
+ * to some interrupt queue, we may prepend iso later.
*
* This is probably silly, but it's a simple way to
@@ -2286,5 +2230,5 @@
/* Only place we don't use the frame list routines */
- uhci->fl->frame[i] = cpu_to_le32(uhci->skeltd[irq]->dma_handle);
+ uhci->fl->frame[i] = cpu_to_le32(uhci->int_qh[irq]->dma_handle);
}
@@ -2315,41 +2259,11 @@
del_timer_sync(&uhci->stall_timer);
- for (i = 0; i < UHCI_NUM_SKELQH; i++)
- if (uhci->skelqh[i]) {
- uhci_free_qh(uhci, uhci->skelqh[i]);
- uhci->skelqh[i] = NULL;
- }
-
-err_alloc_skelqh:
- for (i = 0; i < UHCI_NUM_SKELTD; i++)
- if (uhci->skeltd[i]) {
- uhci_free_td(uhci, uhci->skeltd[i]);
- uhci->skeltd[i] = NULL;
- }
+cleanup:
+ release_uhci (uhci);
-err_alloc_skeltd:
- usb_free_dev(udev);
+ if (udev)
+ usb_free_dev(udev);
hcd->self.root_hub = NULL;
-err_alloc_root_hub:
- pci_pool_destroy(uhci->qh_pool);
- uhci->qh_pool = NULL;
-
-err_create_qh_pool:
- pci_pool_destroy(uhci->td_pool);
- uhci->td_pool = NULL;
-
-err_create_td_pool:
- pci_free_consistent(hcd->pdev, sizeof(*uhci->fl), uhci->fl, uhci->fl->dma_handle);
- uhci->fl = NULL;
-
-err_alloc_fl:
-#ifdef CONFIG_PROC_FS
- remove_proc_entry(hcd->self.bus_name, uhci_proc_root);
- uhci->proc_entry = NULL;
-
-err_create_proc_entry:
-#endif
-
return retval;
}
--- ./drivers-dist/usb/host/uhci-debug.c Wed Jul 24 21:43:02 2002
+++ ./drivers/usb/host/uhci-debug.c Fri Oct 11 13:02:17 2002
@@ -35,10 +35,11 @@
}
-static int inline uhci_is_skeleton_td(struct uhci_hcd *uhci, struct uhci_td *td)
+static int inline
+uhci_is_int_skeleton(struct uhci_hcd *uhci, void *qh)
{
int i;
- for (i = 0; i < UHCI_NUM_SKELTD; i++)
- if (td == uhci->skeltd[i])
+ for (i = 0; i < UHCI_NUM_INTQH; i++)
+ if (qh == uhci->int_qh[i])
return 1;
@@ -286,9 +287,8 @@
}
-static const char *td_names[] = {"skel_int1_td", "skel_int2_td",
- "skel_int4_td", "skel_int8_td",
- "skel_int16_td", "skel_int32_td",
- "skel_int64_td", "skel_int128_td",
- "skel_int256_td", "skel_term_td" };
+static const char *intqh_names[] = {"skel_int1_qh", "skel_int2_qh",
+ "skel_int4_qh", "skel_int8_qh",
+ "skel_int16_qh", "skel_int32_qh",
+ "skel_int64_qh", "skel_int128_qh"};
static const char *qh_names[] = { "skel_ls_control_qh", "skel_hs_control_qh",
"skel_bulk_qh", "skel_term_qh" };
@@ -300,8 +300,8 @@
}
-#define show_td_name() \
+#define show_int_name() \
if (!shown) { \
shown = 1; \
- out += sprintf(out, "- %s\n", td_names[i]); \
+ out += sprintf(out, "- %s\n", intqh_names[i]); \
}
@@ -317,5 +317,4 @@
int i;
struct uhci_qh *qh;
- struct uhci_td *td;
struct list_head *tmp, *head;
@@ -323,7 +322,9 @@
out += uhci_show_status(uhci, out, len - (out - buf));
- out += sprintf(out, "Frame List\n");
+ /* any iso tds will be at the head of the periodic schedule */
+ out += sprintf(out, "ISO Frame List\n");
for (i = 0; i < UHCI_NUMFRAMES; ++i) {
int shown = 0;
+ struct uhci_td *td;
td = uhci->fl->frame_cpu[i];
if (!td)
@@ -334,5 +335,5 @@
out += sprintf(out, " frame list does not match td->dma_handle!\n");
}
- if (uhci_is_skeleton_td(uhci, td))
+ if (uhci_is_int_skeleton(uhci, td))
continue;
show_frame_num();
@@ -347,69 +348,38 @@
}
- out += sprintf(out, "Skeleton TD's\n");
- for (i = UHCI_NUM_SKELTD - 1; i >= 0; i--) {
+ /* interrupt qhs follow iso */
+ out += sprintf(out, "Interrupt QH's\n");
+ for (i = UHCI_NUM_INTQH; i-- > 0; ) {
int shown = 0;
+ u32 link;
- td = uhci->skeltd[i];
+ qh = uhci->int_qh[i];
if (debug > 1) {
- show_td_name();
- out += uhci_show_td(td, out, len - (out - buf), 4);
- }
-
- if (list_empty(&td->fl_list)) {
- /* TD 0 is the int1 TD and links to control_ls_qh */
- if (!i) {
- if (td->link !=
- (cpu_to_le32(uhci->skel_ls_control_qh->dma_handle) | UHCI_PTR_QH)) {
- show_td_name();
- out += sprintf(out, " skeleton TD not linked to ls_control QH!\n");
- }
- } else if (i < 9) {
- if (td->link != cpu_to_le32(uhci->skeltd[i - 1]->dma_handle)) {
- show_td_name();
- out += sprintf(out, " skeleton TD not linked to next skeleton TD!\n");
- }
- } else {
- show_td_name();
-
- if (td->link != cpu_to_le32(td->dma_handle))
- out += sprintf(out, " skel_term_td does not link to self\n");
-
- /* Don't show it twice */
- if (debug <= 1)
- out += uhci_show_td(td, out, len - (out - buf), 4);
- }
-
- continue;
+ show_int_name();
+ out += uhci_show_qh(qh, out, len - (out - buf), 4);
}
- show_td_name();
-
- head = &td->fl_list;
- tmp = head->next;
-
- while (tmp != head) {
- td = list_entry(tmp, struct uhci_td, fl_list);
-
- tmp = tmp->next;
-
- out += uhci_show_td(td, out, len - (out - buf), 4);
+ list_for_each_entry (qh, &qh->list, list) {
+ show_int_name();
+ out += uhci_show_qh(qh, out, len - (out - buf), 4);
}
- if (!i) {
- if (td->link !=
- (cpu_to_le32(uhci->skel_ls_control_qh->dma_handle) | UHCI_PTR_QH))
- out += sprintf(out, " last TD not linked to ls_control QH!\n");
- } else if (i < 9) {
- if (td->link != cpu_to_le32(uhci->skeltd[i - 1]->dma_handle))
- out += sprintf(out, " last TD not linked to next skeleton!\n");
- }
+ link = le32_to_cpu (qh->link & ~UHCI_PTR_QH);
+ if (link != ((i == 0)
+ ? uhci->skel_ls_control_qh->dma_handle
+ : uhci->int_qh[i - 1]->dma_handle))
+ out += sprintf(out, " int%d not linked to next skeleton!\n",
+ 1 << i);
}
+ /* non-periodic transfers are scheduled at the end of every
+ * frame: control (ls, fs) and bulk qhs, plus maybe an fsbr loop
+ */
out += sprintf(out, "Skeleton QH's\n");
for (i = 0; i < UHCI_NUM_SKELQH; ++i) {
int shown = 0;
+ u32 link;
qh = uhci->skelqh[i];
@@ -435,11 +405,10 @@
if (list_empty(&qh->list)) {
if (i < 3) {
- if (qh->link !=
- (cpu_to_le32(uhci->skelqh[i + 1]->dma_handle) | UHCI_PTR_QH)) {
+ link = le32_to_cpu (qh->link & ~UHCI_PTR_QH);
+ if (link != uhci->skelqh[i + 1]->dma_handle) {
show_qh_name();
out += sprintf(out, " skeleton QH not linked to next skeleton QH!\n");
}
}
-
continue;
}
@@ -459,6 +428,6 @@
if (i < 3) {
- if (qh->link !=
- (cpu_to_le32(uhci->skelqh[i + 1]->dma_handle) | UHCI_PTR_QH))
+ link = le32_to_cpu (qh->link & ~UHCI_PTR_QH);
+ if (link != uhci->skelqh[i + 1]->dma_handle)
out += sprintf(out, " last QH not linked to next skeleton!\n");
}