On Thu, Jul 3, 2025 at 6:35 PM Christoph Hellwig <h...@lst.de> wrote:
>
> On Wed, Jul 02, 2025 at 11:43:12AM -0700, Darrick J. Wong wrote:
> > > On a spinning disk, random IO bandwidth remains unchanged, while
> > > sequential IO performance declines. However, setting nr_wb_ctx = 1
> > > via configurable writeback (planned in the next version) eliminates
> > > the decline.
> > >
> > > echo 1 > /sys/class/bdi/8:16/nwritebacks
> > >
> > > We can fetch the device queue's rotational property and allocate BDI with
> > > nr_wb_ctx = 1 for rotational disks. Hope this is a viable solution for
> > > spinning disks?
> >
> > Sounds good to me; spinning rust isn't known for IOPS.
> >
> > Though: What about a raid0 of spinning rust?  Do you see the same
> > declines for sequential IO?
>
> Well, even for a raid0, multiple I/O streams will degrade performance
> on a disk.  Of course, many real-life workloads will have multiple
> I/O streams anyway.
>
> I think the important part is to have:
>
>  a) sane defaults
>  b) an easy way for the file system and/or user to override the default
>
> For a) a single thread for rotational devices is a good default.  For
> file systems that drive multiple spindles independently or do
> compression, multiple threads might still make sense.
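
For reference, the rotational default from a) could look roughly like
the sketch below. bdev_nonrot() is an existing block-layer helper; the
nr_wb_ctx concept is from the proposed (unmerged) series, and the
function name and the flash-side scaling policy here are illustrative
assumptions, not a merged kernel API:

	/*
	 * Rough sketch only: pick the default number of writeback
	 * contexts at bdi allocation time.  bdev_nonrot() is a real
	 * helper; the function name and the cap of 8 for flash are
	 * assumptions.
	 */
	static unsigned int bdi_default_nr_wb_ctx(struct block_device *bdev)
	{
		/* Spinning rust: one context avoids the sequential-IO decline. */
		if (!bdev_nonrot(bdev))
			return 1;

		/* Flash copes with concurrent streams; scale with CPUs, capped. */
		return min_t(unsigned int, num_online_cpus(), 8);
	}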
>
> For b) one big issue is that right now the whole writeback handling is
> per-bdi and not per superblock.  So maybe the first step needs to be
> to move the writeback to the superblock instead of bdi?

The bdi is tied to the underlying block device and is used for
device-bandwidth-specific throttling, dirty ratelimiting, etc. Making
writeback per-superblock would require duplicating that device-specific
throttling and ratelimiting at the superblock level, which would be
difficult.

> If someone uses partitions and multiple file systems on spinning rust
> these days, reducing the number of writeback threads isn't really
> going to save their day either.
>

In this case, with a single writeback thread, the multiple
partitions/filesystems share the same bdi, so we fall back to the base
case. Won't that help?
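
To illustrate the override path in b), the nwritebacks knob from the
earlier mail could be wired up as a bdi sysfs attribute along these
lines. The dev_get_drvdata() plumbing mirrors the existing bdi
attributes in mm/backing-dev.c; bdi_resize_wb_ctx() is an assumed
helper from the proposed series, not an existing kernel API:

	/*
	 * Sketch of the user-facing override: a write-only sysfs
	 * attribute on the bdi device.  bdi_resize_wb_ctx() is an
	 * assumed helper, not an existing API.
	 */
	static ssize_t nwritebacks_store(struct device *dev,
					 struct device_attribute *attr,
					 const char *buf, size_t count)
	{
		struct backing_dev_info *bdi = dev_get_drvdata(dev);
		unsigned int nr;
		int ret;

		ret = kstrtouint(buf, 10, &nr);
		if (ret)
			return ret;
		if (nr < 1 || nr > num_online_cpus())
			return -EINVAL;

		ret = bdi_resize_wb_ctx(bdi, nr);	/* assumed helper */
		return ret ? ret : count;
	}
	static DEVICE_ATTR_WO(nwritebacks);

With that in place, "echo 1 > /sys/class/bdi/8:16/nwritebacks" would
collapse a device back to a single writeback context at runtime.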

