I'm still working on this ethernet driver for a USB ADSL modem. It's
going pretty well; it works for the most part. I followed the model used
by most other drivers, with one read URB for incoming data. I set the
size of this URB's transfer_buffer to 3392 bytes, which is 64 * 53: the
least common multiple of the USB bulk packet size (64 bytes) and an ATM
cell (53 bytes).
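
For concreteness, the setup looks roughly like this. This is just a
minimal sketch against the current usb_fill_bulk_urb() interface
(older kernels spell these helpers differently), and my_start_read(),
my_read_complete(), and bulk-in endpoint 1 are made-up names for
illustration, not the real driver:

#include <linux/slab.h>
#include <linux/usb.h>

#define RX_BUF_SIZE (64 * 53)  /* LCM of bulk packet and ATM cell sizes */

static void my_read_complete(struct urb *urb);  /* defined below */

/* Allocate and submit the single bulk-in read URB. */
static int my_start_read(struct usb_device *udev, void *context)
{
        struct urb *urb;
        unsigned char *buf;

        urb = usb_alloc_urb(0, GFP_KERNEL);
        if (!urb)
                return -ENOMEM;

        buf = kmalloc(RX_BUF_SIZE, GFP_KERNEL);
        if (!buf) {
                usb_free_urb(urb);
                return -ENOMEM;
        }

        /* Bulk-in endpoint 1 is a placeholder for this example. */
        usb_fill_bulk_urb(urb, udev, usb_rcvbulkpipe(udev, 1),
                          buf, RX_BUF_SIZE, my_read_complete, context);

        return usb_submit_urb(urb, GFP_KERNEL);
}
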
We do this on the Windows and Macintosh platforms as well. I'm
expecting, therefore, to always get an integral number of ATM cells in
my incoming data. However, if I do large pings into the machine (for
this example, a 4k ping size), not all of the data makes it into my
read URB. I'm assuming (for the sake of discussion) that the time it
takes me to process the first chunk of data (since the data is larger
than my transfer buffer, it has to arrive in multiple chunks) is too
long, and the rest of the data gets dropped. I don't see any errors
from the usb or usb-uhci modules about data being dropped.
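
For what it's worth, a check like the following in the completion
handler (again just a sketch, with the same made-up names and the
modern one-argument handler signature) should at least confirm
whether each completed read is cell-aligned:

/* Read completion: warn if a successful transfer isn't a whole
 * number of ATM cells, which is what I expect to see on the wire. */
static void my_read_complete(struct urb *urb)
{
        if (urb->status == 0 && urb->actual_length % 53 != 0)
                printk(KERN_WARNING "adsl: read of %u bytes is not "
                       "a multiple of 53\n", urb->actual_length);

        /* ... hand the cells up the stack, then resubmit the URB
         * (GFP_ATOMIC, since we're in completion context) ... */
}
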
But I'm definitely not getting the full set of cells that makes up the
whole 4k ping.

So I tried switching to a model where we queue multiple bulk read URBs.
With this model, my URBs have smaller sizes, but they are often not
sized in multiples of the ATM cell size. How is this all supposed to
work? How does the USB code decide how much data to put into a URB?
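
Roughly, the queued version looks like this (same caveats as above: a
sketch with hypothetical names, endpoint 1 as a placeholder, and
teardown/unlinking omitted):

#define N_RX_URBS   4
#define RX_URB_SIZE (64 * 53)

/* Keep several bulk-in URBs queued so the host controller always
 * has an empty buffer while we process a completed one. */
static int my_queue_reads(struct usb_device *udev, void *context)
{
        int i, ret;

        for (i = 0; i < N_RX_URBS; i++) {
                struct urb *urb = usb_alloc_urb(0, GFP_KERNEL);
                unsigned char *buf;

                if (!urb)
                        return -ENOMEM;

                buf = kmalloc(RX_URB_SIZE, GFP_KERNEL);
                if (!buf) {
                        usb_free_urb(urb);
                        return -ENOMEM;
                }

                usb_fill_bulk_urb(urb, udev, usb_rcvbulkpipe(udev, 1),
                                  buf, RX_URB_SIZE, my_read_complete,
                                  context);

                ret = usb_submit_urb(urb, GFP_KERNEL);
                if (ret)
                        return ret;
        }
        return 0;
}

The intent is that even if processing one buffer is slow, the
controller still has somewhere to put incoming data.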

Maybe my explanation is not clear. Tomorrow I'm going to put this on
the CATC and see what the data looks like on the bus when running
under Linux and when running on another platform (one that works with
large pings).

Thanks,
-Chris
