On Wed, Jun 15, 2022 at 11:09:31AM +0100, Richard W.M. Jones wrote:
> This kernel setting limits the maximum request size on the queue.
>
> In my testing reading and writing files with the default [above] the
> kernel never got anywhere near sending multi-megabyte requests.  In
> fact the largest request it sent was 128K, even when I did stuff like:
>
>   # dd if=/dev/zero of=/tmp/mnt/zero bs=100M count=10
>
> 128K happens to be 2 x blk_queue_io_opt, but I need to do more testing
> to see if that relationship always holds.
The answer is apparently no.  With minimum_io_size == 64K and
optimal_io_size == 256K, the server still only sees at most 128K
requests.

Although I still think we need to make these changes to nbd.ko, I don't
think this is going to solve the original problem of trying to
aggregate requests into the very large block sizes favoured by S3.
(nbdkit blocksize filter + a layer of caching seems like the way to go
for that)

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html

_______________________________________________
Libguestfs mailing list
Libguestfs@redhat.com
https://listman.redhat.com/mailman/listinfo/libguestfs
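[For anyone following along: the check above can be reproduced without an
NBD device.  This is a minimal shell sketch with the sizes from this
thread hard-coded as assumptions (optimal_io_size = 256K as reported,
128K as the largest request actually observed); it only demonstrates
that the "2 x blk_queue_io_opt" relationship fails to hold for these
numbers.]

```shell
# Sizes taken from this thread (assumptions, not measured here):
optimal_io_size=$((256 * 1024))   # optimal_io_size reported by the device
max_request=$((128 * 1024))       # largest request the server actually saw

# Does the earlier guess (max request == 2 x optimal_io_size) hold?
if [ "$max_request" -eq $((2 * optimal_io_size)) ]; then
    echo "max request is 2 x optimal_io_size"
else
    echo "relationship does not hold: $max_request vs $((2 * optimal_io_size))"
fi
```

Running it prints "relationship does not hold: 131072 vs 524288",
matching the observation that the server still only sees 128K requests.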