On 01.07.2025 at 13:44, Hanna Czenczek wrote:
> We probably want to support larger write sizes than just 4k; 64k seems
> nice.  However, we cannot read partial requests from the FUSE FD, we
> always have to read requests in full; so our read buffer must be large
> enough to accommodate potential 64k writes if we want to support that.
> 
> Always allocating FuseRequest objects with 64k buffers in them seems
> wasteful, though.  But we can get around the issue by splitting the
> buffer into two and using readv(): One part will hold all normal (up to
> 4k) write requests and all other requests, and a second part (the
> "spill-over buffer") will be used only for larger write requests.  Each
> FuseQueue has its own spill-over buffer, and only if we find it used
> when reading a request will we move its ownership into the FuseRequest
> object and allocate a new spill-over buffer for the queue.
> 
> This way, we get to support "large" write sizes without having to
> allocate big buffers when they aren't used.
> 
> Also, this even reduces the size of the FuseRequest objects because the
> read buffer has to have at least FUSE_MIN_READ_BUFFER (8192) bytes; but
> the requests we support are not quite so large (except for >4k writes),
> so until now, we basically had to have useless padding in there.
> 
> With the spill-over buffer added, the FUSE_MIN_READ_BUFFER requirement
> is easily met and we can decrease the size of the buffer portion that is
> right inside of FuseRequest.
> 
> As for benchmarks, the benefit of this patch can be shown easily by
> writing a 4G image (with qemu-img convert) to a FUSE export:
> - Before this patch: Takes 25.6 s (14.4 s with -t none)
> - After this patch: Takes 4.5 s (5.5 s with -t none)
> 
> Reviewed-by: Stefan Hajnoczi <[email protected]>
> Signed-off-by: Hanna Czenczek <[email protected]>

The commit message seems outdated; there is no such thing as a
FuseRequest object.

I agree with the idea of allocating a separate buffer for the data to be
written. I'm not so sure that the approach taken here, combining an
in-place buffer with a spill-over buffer, actually does much for us in
exchange for the additional complexity.

The allocation + memcpy for the in_place buf in fuse_co_write() bothers
me a bit. I'd rather have a buffer for the data to write that can be
used directly. And let's be real, we already allocate a 1 MB stack per
request; I don't think 64k more or less makes a big difference, but it
would allow us to save the memcpy() for 4k requests and additionally an
allocation for larger requests.

The tradeoff with an iov that consists of a buffer in FuseQueue that is
only big enough for the header and fuse_write_in, followed directly by
the per-request buffer owned by the coroutine, is that for request
structs larger than fuse_write_in, you have to copy the rest back from
the data buffer first. That seems to be only fuse_setattr_in, which
shouldn't be a hot path at all, and it's only a few bytes.
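
To make this a bit more concrete, here is a rough sketch of the kind of
layout I mean; the names, the struct layout and the error handling are
made up here and not taken from your patch:

#include <errno.h>
#include <string.h>
#include <sys/uio.h>
#include <linux/fuse.h>

/*
 * Read this much into the queue-owned buffer: the header plus the
 * largest in-struct we want to receive in place, i.e. fuse_write_in.
 * Everything beyond that (write payloads) goes straight into the
 * per-request data buffer owned by the coroutine.
 */
#define FUSE_IN_PLACE_READ \
    (sizeof(struct fuse_in_header) + sizeof(struct fuse_write_in))

/*
 * The buffer itself is a bit larger so that a fuse_setattr_in can be
 * made contiguous again after reading (see below).
 */
#define FUSE_IN_PLACE_SIZE \
    (sizeof(struct fuse_in_header) + sizeof(struct fuse_setattr_in))

typedef struct FuseQueue {
    int fuse_fd;
    union {
        struct fuse_in_header hdr;
        char bytes[FUSE_IN_PLACE_SIZE];
    } in_place;
    /* ... */
} FuseQueue;

static ssize_t fuse_queue_read_request(FuseQueue *q,
                                       void *data_buf, size_t data_buf_len)
{
    struct iovec iov[2] = {
        { .iov_base = q->in_place.bytes, .iov_len = FUSE_IN_PLACE_READ },
        { .iov_base = data_buf,          .iov_len = data_buf_len },
    };
    ssize_t ret;

    /*
     * The total iov length must still be >= FUSE_MIN_READ_BUFFER; with
     * a 64k data_buf that is trivially the case.
     */
    ret = readv(q->fuse_fd, iov, 2);
    if (ret < 0) {
        return -errno;
    }
    /* (A real implementation would of course validate ret against
     * hdr.len before touching the header.) */

    /*
     * Only for in-structs larger than fuse_write_in does the tail end
     * up at the start of data_buf; copy it back so the struct is
     * contiguous again.  As far as I can see, that is only
     * fuse_setattr_in, i.e. a few bytes on a path that is not hot.
     */
    if (q->in_place.hdr.opcode == FUSE_SETATTR) {
        memcpy(q->in_place.bytes + FUSE_IN_PLACE_READ, data_buf,
               sizeof(struct fuse_setattr_in) - sizeof(struct fuse_write_in));
    }

    return ret;
}

The request coroutine would own data_buf (64k directly on its stack
would do, given the 1 MB we already allocate) and could hand it straight
to fuse_co_write() for FUSE_WRITE requests, with no memcpy() and no
extra allocation.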

All that said, what you have is an obvious improvement over limiting
write requests to 4k.

Kevin

