On Tue, 20 Sep 2016, Felipe Balbi wrote:

> And here's trace output (complete, scroll to bottom). It seems to me
> like the thread didn't wake up at all. It didn't even try to
> execute. I'll add some more traces and try to get better information
> about what's going on.
>
> # tracer: nop
> #
> # entries-in-buffer/entries-written: 2865/2865   #P:4
> #
> #                              _-----=> irqs-off
> #                             / _----=> need-resched
> #                            | / _---=> hardirq/softirq
> #                            || / _--=> preempt-depth
> #                            ||| /     delay
> #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
> #              | |       |   ||||       |         |
Skipping to the end...

> irq/17-dwc3-2522   [002] d...    43.504199: bulk_out_complete: 0, 31/31
> file-storage-2521  [001] ....    43.504202: fsg_main_thread: get_next_command -> 0
> file-storage-2521  [001] ....    43.504288: fsg_main_thread: do_scsi_command -> 0
> file-storage-2521  [001] ....    43.504295: fsg_main_thread: finish_reply -> 0
> file-storage-2521  [001] ....    43.504298: fsg_main_thread: send_status -> 0
> irq/17-dwc3-2522   [002] d...    43.504347: bulk_in_complete: 0, 16384/16384
> irq/17-dwc3-2522   [002] d...    43.504351: bulk_in_complete: 0, 16384/16384
> irq/17-dwc3-2522   [002] d...    43.504434: bulk_in_complete: 0, 16384/16384
> irq/17-dwc3-2522   [002] d...    43.504438: bulk_in_complete: 0, 16384/16384
> irq/17-dwc3-2522   [002] d...    43.504535: bulk_in_complete: 0, 16384/16384
> irq/17-dwc3-2522   [002] d...    43.504539: bulk_in_complete: 0, 16384/16384
> irq/17-dwc3-2522   [002] d...    43.504618: bulk_in_complete: 0, 16384/16384
> irq/17-dwc3-2522   [002] d...    43.504703: bulk_in_complete: 0, 16384/16384
> irq/17-dwc3-2522   [002] d...    43.504794: bulk_in_complete: 0, 13/13
> irq/17-dwc3-2522   [002] d...    43.504797: bulk_out_complete: 0, 31/31

Like you say, it appears that the thread didn't get woken up at all.
But this is inconsistent with your earlier results.  On Sep. 9, you
posted a message that ended with these lines:

> irq/17-dwc3-3579   [003] d..1 21167.729666: bulk_out_complete: compl: bh ffff880111e6aac0 state 1
> file-storage-3578  [002] .... 21167.729670: fsg_main_thread: next: bh ffff880111e6aac0 state 1

This indicates that in the earlier test, the thread did start running
and get_next_command should have returned.

The trace you posted after this one doesn't seem to show anything new,
as far as I can tell.  So I still can't tell what's happening.

Maybe the patch below will help.  It concentrates on the critical area.
Alan Stern

Index: usb-4.x/drivers/usb/gadget/function/f_mass_storage.c
===================================================================
--- usb-4.x.orig/drivers/usb/gadget/function/f_mass_storage.c
+++ usb-4.x/drivers/usb/gadget/function/f_mass_storage.c
@@ -402,8 +402,12 @@ static void wakeup_thread(struct fsg_com
 	smp_wmb();	/* ensure the write of bh->state is complete */
 	/* Tell the main thread that something has happened */
 	common->thread_wakeup_needed = 1;
-	if (common->thread_task)
-		wake_up_process(common->thread_task);
+	if (common->thread_task) {
+		if (wake_up_process(common->thread_task))
+			trace_printk("wakeup_thread 1\n");
+		else
+			trace_printk("wakeup_thread 0\n");
+	}
 }
 
 static void raise_exception(struct fsg_common *common, enum fsg_state new_state)
@@ -479,6 +483,8 @@ static void bulk_out_complete(struct usb
 			req->status, req->actual, bh->bulk_out_intended_length);
 	if (req->status == -ECONNRESET)		/* Request was cancelled */
 		usb_ep_fifo_flush(ep);
+	trace_printk("bulk_out: %d, %u/%u\n", req->status, req->actual,
+			bh->bulk_out_intended_length);
 
 	/* Hold the lock while we update the request and buffer states */
 	smp_wmb();
@@ -2204,13 +2210,17 @@ static int get_next_command(struct fsg_c
 
 	/* Wait for the CBW to arrive */
 	while (bh->state != BUF_STATE_FULL) {
+		trace_printk("get_next: sleep\n");
 		rc = sleep_thread(common, true);
+		trace_printk("get_next: sleep -> %d loop %d\n",
+				rc, bh->state != BUF_STATE_FULL);
 		if (rc)
 			return rc;
 	}
 	smp_rmb();
 	rc = fsg_is_set(common) ? received_cbw(common->fsg, bh) : -EIO;
 	bh->state = BUF_STATE_EMPTY;
+	trace_printk("get_next: return %d\n", rc);
 	return rc;
 }