On 2024/8/24 01:08, Matthew Wilcox wrote:
On Fri, Aug 23, 2024 at 11:43:41AM +0930, Qu Wenruo wrote:
On 2024/8/23 07:55, Qu Wenruo wrote:
On 2024/8/22 21:37, Matthew Wilcox wrote:
On Thu, Aug 22, 2024 at 08:28:09PM +0930, Qu Wenruo wrote:
But what will happen if some writes happen to that larger folio?
Does the MM layer detect that and split the folio? Or does the fs have
to go the subpage route (with an extra structure recording all the
subpage flag bitmaps)?

Entirely up to the filesystem.  It would help if btrfs used the same
terminology as the rest of the filesystems instead of inventing its own
"subpage" thing.  As far as I can tell, "subpage" means "fs block size",
but maybe it has a different meaning that I haven't ascertained.

Then tell me the correct terminology to describe an fs block size
smaller than page size in the first place.

"fs block size" is not good enough; we want a term to describe an
"fs block size" smaller than page size.

Oh dear.  btrfs really has corrupted your brain.  Here's the terminology
used in the rest of Linux:

SECTOR_SIZE.  Fixed at 512 bytes.  This is the unit used to communicate
with the block layer.  It has no real meaning, other than Linux doesn't
support block devices with 128 and 256 byte sector sizes (I have used
such systems, but not in the last 30 years).

LBA size.  This is the unit that the block layer uses to communicate
with the block device.  Must be at least SECTOR_SIZE.  I/O cannot be
performed in smaller chunks than this.

Physical block size.  This is the unit that the device advertises as
its efficient minimum size.  I/Os smaller than this / not aligned to
this will probably incur a performance penalty as the device will need
to do a read-modify-write cycle.

fs block size.  Known as s_blocksize or i_blocksize.  Must be a multiple
of LBA size, but may be smaller than physical block size.  Files are
allocated in multiples of this unit.

PAGE_SIZE.  Unit that memory can be mapped in.  This applies to both
userspace mapping of files as well as calls to kmap_local_*().

folio size.  The size that the page cache has decided to manage this
chunk of the file in.  A multiple of PAGE_SIZE.


I've mostly listed these from smallest to largest.  The relationships
that must be true:

SS <= LBA <= Phys
LBA <= fsb
PS <= folio
fsb <= folio

ocfs2 supports fsb > PAGE_SIZE, but this is a rarity.  Most filesystems
require fsb <= PAGE_SIZE.
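
For reference, here is a rough sketch of how those units map to
in-kernel accessors (a sketch only, assuming a recent kernel; helper
names can vary by version):

/*
 * Rough sketch (not from any existing file): how the units above map to
 * in-kernel accessors.
 */
#include <linux/blkdev.h>
#include <linux/fs.h>
#include <linux/mm.h>

static void dump_units(struct inode *inode, struct folio *folio)
{
	struct block_device *bdev = inode->i_sb->s_bdev;

	pr_info("SECTOR_SIZE   = %u\n", (unsigned int)SECTOR_SIZE);
	pr_info("LBA size      = %u\n", (unsigned int)bdev_logical_block_size(bdev));
	pr_info("physical size = %u\n", (unsigned int)bdev_physical_block_size(bdev));
	pr_info("fs block size = %u\n", (unsigned int)i_blocksize(inode));
	pr_info("PAGE_SIZE     = %lu\n", PAGE_SIZE);
	pr_info("folio size    = %zu\n", folio_size(folio));
}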

Filesystems like UFS also support a fragment size which is less than fs
block size.  It's kind of like tail packing.  Anyway, that's internal to
the filesystem and not exposed to the VFS.

I know all these things; the terminology I need is a short one to
describe the fsb < PAGE_SIZE case.

So far, in the fs realm, "subpage" (sub-page-sized block size) is the
shortest and simplest one.

Sure, it can be confused with a "subpage" range inside a page, but
that's because you're mostly working in the MM layer.

So please give me a better alternative to describe exactly the
"fsb < PAGE_SIZE" case.
Otherwise it's just complaining without any constructive advice.


I have no idea why btrfs thinks it needs to track writeback, ordered,
checked and locked in a bitmap.  Those make no sense to me.  But they
make no sense to me even if you're supporting a 4KiB filesystem on a
machine with a 64KiB PAGE_SIZE, not just in the context of "larger folios".
Writeback is something the VM tells you to do; why do you need to tag
individual blocks for writeback?

Because there are cases where btrfs needs to write back only part of the
folio independently.

iomap manages to do this with only tracking per-block dirty bits.

Well, does iomap support asynchronous compression?

This proves Josef's point: different people have different focuses.
Please do not assume everyone knows the realm you're working in, nor
that there will always be a one-size-fits-all solution.


And this matters especially when mixing compressed and uncompressed
writes inside a page, e.g.:

        0     16K     32K     48K      64K
        |//|          |///////|
           4K

In the above case, if we need to write back the page with a 4K sector
size, the first 4K is not suitable for compression (the result would
still take a full 4K block), while the range [32K, 48K) will be
compressed.

In that case, the [0, 4K) range will be submitted directly for IO,
while [32K, 48K) will be submitted for compression in another workqueue.
(Otherwise the time-consuming compression would delay the writeback of
the remaining pages.)

This means the dirty/writeback flags for those ranges will be changed
at different times.
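
As a purely hypothetical sketch (the names below are invented for
illustration, not btrfs's actual helpers), per-block writeback tracking
for such a split submission could look like:

#include <linux/bitmap.h>
#include <linux/types.h>

/*
 * Hypothetical helper: mark only the blocks of one sub-range as under
 * writeback, so that e.g. [0, 4K) and [32K, 48K) can be submitted and
 * completed independently within the same folio.
 */
static void mark_range_writeback(unsigned long *wb_bitmap, u32 blocksize,
				 u64 folio_start, u64 start, u32 len)
{
	unsigned int first_bit = (u32)(start - folio_start) / blocksize;
	unsigned int nbits = len / blocksize;

	bitmap_set(wb_bitmap, first_bit, nbits);
}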

In case you mean using an atomic to track the writeback/lock progress:
it's possible to go that path, but for now it's not space efficient.

For the 16-blocks-per-page case (4K sector size, 64K page size), each
atomic takes 4 bytes while a bitmap only takes 2 bytes.

And for the 4K sector size, 16K page size case it's even worse, so btrfs
compacts all the bitmaps into a larger one to save more space, while
each atomic still takes 4 bytes.
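
To illustrate the space comparison (struct names made up for this
example, not real btrfs structures):

#include <linux/atomic.h>
#include <linux/types.h>

/*
 * Illustration only: per-folio tracking space for 4K blocks in a
 * 64K page, i.e. 16 blocks per folio.
 */
struct per_folio_atomic {
	atomic_t writeback_pending;	/* 4 bytes */
};

struct per_folio_bitmap {
	u16 writeback_bitmap;		/* 16 blocks -> 16 bits, 2 bytes */
};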

Sure, but it doesn't scale up well.  And it's kind of irrelevant whether
you occupy 2 or 4 bytes at the low end because you're allocating all
this through slab, so you get rounded to 8 bytes anyway.
iomap_folio_state currently occupies 12 bytes + 2 bits per block.  So
for a 16 block folio (4k in 64k), that's 32 bits for a total of 16
bytes.  For a 2MB folio on a 4kB block size fs, that's 1024 bits for
a total of 140 bytes (rounded to 192 bytes by slab).
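
For reference, the structure being described looks roughly like this in
recent kernels (fs/iomap/buffered-io.c; exact layout may differ by
version):

struct iomap_folio_state {
	spinlock_t		state_lock;
	unsigned int		read_bytes_pending;
	atomic_t		write_bytes_pending;
	/*
	 * Two bits per block: the first half of state[] is the per-block
	 * uptodate bitmap, the second half the per-block dirty bitmap.
	 */
	unsigned long		state[];
};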

Yes, it's not scalable for all folio sizes, but the turning point is 32
bits, which means a 128K folio on a 4K page sized system.
Since the folio code already considers order > 3 as costly, I'm totally
fine sacrificing the higher orders, not the other way around.

Although the real determining factor is the real-world distribution of
folio sizes.

But for now, since btrfs only supports a 4K block size with 64K/16K page
sizes, it's still a win for us.

Another point in favor of the bitmap is that it helps a lot (at least
for me) with debugging, but that can always be hidden behind some debug
flag.


I'm not denying the possibility of fully migrating to the iomap way, but
that will need a lot of extra work first, like cleaning up the cow_fixup
thing to reduce the extra page flag tracking.
(That always causes a lot of discussion but seldom leads to patches.)

Thanks,
Qu

Hm, it might be worth adding a kmalloc-160, we'd get 25 objects per 4KiB
page instead of 21 192-byte objects ...



