> Agreed, shattering a read/write stream into minuscule pieces improves interop at a small cost to typical usage of much storage hardware.
It improves interop, yes ... but that cost would only be small for full-speed devices. It's called "de-tuning", or "pessimizing" (contrast "optimizing").
For devices that work properly -- like the high-speed ISD bridges, the Western Digital drives I've tried, and many others -- it's better not to force mini-transactions on them. Make whitelisted hardware run at its natural speed by default: no fragment limits, and use the system's low-level flow-control mechanisms as they're intended.
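To make that concrete, here's a rough sketch of the whitelist idea -- not the actual usb-storage code; the table entries, names, and limits are illustrative assumptions only:

/*
 * Sketch only -- not the real usb-storage implementation.  The
 * whitelist entries, names, and limits below are assumptions.
 */
#include <string.h>

struct bridge_id {
        const char *vendor;
        const char *product;
};

/* Hypothetical whitelist of bridges known to handle large transfers. */
static const struct bridge_id good_bridges[] = {
        { "In-System", "ISD-300" },
        { "Western Digital", "External HDD" },
};

#define CLAMPED_MAX_SECTORS     240     /* 120 KBytes: conservative interop cap */
#define NO_FRAGMENT_LIMIT       65535   /* effectively unlimited */

static unsigned int default_max_sectors(const char *vendor,
                                        const char *product)
{
        unsigned int i;

        for (i = 0; i < sizeof(good_bridges) / sizeof(good_bridges[0]); i++) {
                if (strcmp(good_bridges[i].vendor, vendor) == 0 &&
                    strcmp(good_bridges[i].product, product) == 0)
                        return NO_FRAGMENT_LIMIT;  /* run at natural speed */
        }
        return CLAMPED_MAX_SECTORS;     /* unknown device: mini-transactions */
}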
I've certainly seen I/O queues over 400 KBytes work just fine at sustained rates, and with much lower system overhead (CPU time, IRQ rate, wasted bandwidth) than with mini-transactions. The arithmetic is simple: splitting a 480 KByte request into 120 KByte pieces costs four command/completion cycles where one would do.
- Dave
Sorry, I mistook the words "I wonder if we shouldn't reduce max_sectors permanently" as a disavowal of your cogent discussion:
> We should let people who want more performance tweak the fragment size up
> (to a reasonably large limit) if they want to try -- this could lead to
> device failures or better performance. Letting people shoot themselves in
> the foot is a long-standing tradition in Linux, but I don't want to leave
> the 'non-power users' out in the cold.
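In code terms, that policy could be as simple as a bounded setter. The sketch below is only illustrative -- the names, default, and ceiling are assumptions, not the real driver interface:

/*
 * Sketch only: a bounded tunable, so power users can raise the cap
 * while a hard ceiling still protects everyone else.
 */
#define MAX_SECTORS_DEFAULT     240     /* safe default for non-power users */
#define MAX_SECTORS_CEILING     2048    /* "reasonably large limit": 1 MByte */

static unsigned int max_sectors = MAX_SECTORS_DEFAULT;

static int set_max_sectors(unsigned int requested)
{
        if (requested == 0 || requested > MAX_SECTORS_CEILING)
                return -1;      /* refuse to go past the ceiling */
        max_sectors = requested;
        return 0;
}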
Pat LaVarre