On Wednesday 16 May 2007, Hans Petter Selasky wrote:
> Hi,
> 
> I'm currently working on a Linux USB emulation layer for FreeBSD. In that 
> regard I have some issues that make the stack perform non-optimally due to 
> its API design.
> 
>...
> 
> What I would suggest is that when you allocate an URB and DMA'able memory, 
> you have to specify which pipe {CONTROL, BULK, INTERRUPT or ISOC} it belongs 
> to.
> 
> What do you think?

Riddle me this:  When should Linux choose to adopt a design mistake
made by a non-Linux driver stack?


> The reason is that in the new USB stack on FreeBSD, the USB transfer 
> descriptors are allocated along with the data-buffer,

Whereas on Linux, data buffers are not normally bound to a particular
driver stack (such as USB).  That matches normal hardware usage, and
provides a less restrictive system which minimizes routine requirements
to copy data.  (And thus, structural performance limits.)


> so that when you  
> unsetup a USB transfer, absolutely all memory related to the transfer is 
> freed. This also has a security implication

Calling this "security" seems like quite a stretch to me.  Systems
that don't behave when buffers are exhausted are buggy, sure.  And
marginal behavior is always hard to test and debug; and bugs are
always a potential home for exploits (including DoS).  But this has
no more security implications than any other tradeoff.


> in that when you have  
> pre-allocated all buffers and all USB host controller descriptors, you will 
> never get into the situation of not being able to allocate transfer descriptors 
> on the fly, like done on Linux.

That's not a failure mode that's been often observed on Linux.  (Never,
in my own experience... which I admit has not focussed on that particular
type of stress load.)  So it's hard to argue that it should motivate a
redesign of core APIs.

Transfer descriptors are an artifact of one kind of host controller;
it'd be wrong to assume all HCDs use them.

The related issue that's been discussed is how to shrink submit paths,
giving lower overhead.

Submitting URBs directly to endpoints would remove lots of dispatch
logic.  Pre-allocating some TDs would remove logic too, but implies
changing the URB lifecycle.  The peripheral/"gadget" API allows for
both of those optimizations, but adopting them on the host side would
not be particularly easy because of the "how to migrate all drivers"
problem.  I'd expect submit-to-endpoints to be adopted more easily,
since the low level HCD primitives already work that way.

- Dave

_______________________________________________
linux-usb-devel@lists.sourceforge.net