On 03.09.2014 at 16:48, ronnie sahlberg <ronniesahlb...@gmail.com> wrote:
> On Wed, Sep 3, 2014 at 7:18 AM, Paolo Bonzini <pbonz...@redhat.com> wrote:
>> On 03/09/2014 16:17, ronnie sahlberg wrote:
>>> I think (a) would be best.
>>> But I would suggest some small modifications:
>>>
>>> Set the default max to something even smaller, like 256 sectors. This
>>> should not have much effect on throughput since the client/initiator
>>> can just send several concurrent I/Os instead of one large I/O.
>>
>> I disagree. 256 sectors are 128k, that's a fairly normal I/O size.
>>
>
> Fair enough.
> But maybe a command line argument to set/override the max?

This is a good approach. When introducing max_xfer_length to bs->bl I would
try to have as little impact as possible. Ideally there would be no changes
to current behavior; I just want to fix things that would otherwise have
failed.

>
> This would be useful when using scsi-passthrough to usb-scsi disks.
> The Linux kernel has, I think, a hard limit of 120k for the USB transport,
> which means that when doing SG_IO to a USB disk you are limited to this as
> the max transfer length, since anything larger will break in the USB layer.
> This limit is also, I think, not easily discoverable by an application, since
> * the actual device still reports, in most cases,
>   max_transfer_length == unlimited
> * and it is semi-hard to discover whether a /dev/sg* device is on a
>   USB bus or not.

This is good for debugging, but I personally need to fix this for a
production system, and USB-attached SCSI disks are not likely a production
thing ;-)

One other thought regarding multiwrite_merge: does it make sense to merge
requests spanning multiple clusters/sectors? When I was looking at what
Kevin tried to fix when he introduced multiwrite_merge back in 2009, I was
still not sure that it makes sense to merge as much as possible. But back
then the cluster_size information was not available for most protocols,
except for QCOW2 I think.

Peter