On Sun, Apr 23, 2023 at 09:34:39AM +0200, The MH wrote:
> Package: e2fsprogs
> Version: 1.46.5
> Severity: minor
> 
> I did not find this bug in the patchnotes for the latest versions on
> e2fsprogs.sourceforge.net/e2fsprogs-release.html, so I assume it is
> still present.
> 
> I stumbled upon this because I wanted to specify -i 768k for my main
> data drive (a 2 TB hard drive) as a kind of less "aggressive" option
> than -i 1M or -T largefile.
> 
> I proceeded to test this behaviour against a file container of
> exactly 4 GiB. As I understand it, the inode count should decrease in
> steps of 512 for every additional 64k of ratio, since that is the
> boundary for one inode table block per block group (which ranges from
> 16 down to 8 blocks in this scenario).
> 
> -i 512k: number of inodes: expected 8192 actual 8192 -> ok
> -i 576k: number of inodes: expected 7680 actual 7680 -> ok
> 
> -i 640k: number of inodes: expected 7168 actual 6656

I'm not sure how you are calculating the "expected" value, but the
inode ratio is literally that: we take the total size of the file
system and divide it by the inode ratio to determine the target number
of inodes.  So with an inode ratio of 640k, the target is

4 GiB / 640k, or 4294967296 / 655360, or 6553.60.  This gets rounded
up to 6656.
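The pre-rounding targets for all four ratios in the report can be
checked with a couple of lines of Python (file size and ratios taken
from the report above; this is just the raw division, before any
rounding to inode table blocks):

```python
FS_BYTES = 4 * 1024**3  # the 4 GiB file container from the report

for ratio_k in (512, 576, 640, 704):
    target = FS_BYTES / (ratio_k * 1024)  # minimum number of inodes
    print(f"-i {ratio_k}k -> target {target:.2f} inodes")
# -i 512k -> target 8192.00 inodes
# -i 576k -> target 7281.78 inodes
# -i 640k -> target 6553.60 inodes
# -i 704k -> target 5957.82 inodes
```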

> -i 704k: number of inodes: expected 6656 actual 6144

And...

4 GiB / 704k, or 4294967296 / 720896, or 5957.82

... which gets rounded up to 6144.  So how does the rounding up
happen?  Well, a 4GiB file system, using a 4k block size, has 1048576
4k blocks.  Since there are 32,768 blocks in a block group, that means
there are 32 block groups.  So that means we need roughly 186.18
inodes per block group.  Since the default inode size is 256 bytes, we
can fit 16 inodes per 4k block.  With 11 inode table blocks per block
group we would get 176 inodes per block group, which is too few (less
than the roughly 187 needed), while 12 inode table blocks give 192
inodes per block group.  And 32 block groups times 192 inodes per
block group works out to 6144, which is the actual number that you've
seen.
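The rounding described above can be sketched in a few lines of Python.
This is a simplification of mke2fs's actual layout logic, using the
geometry assumed in this thread (4k blocks, 32768 blocks per block
group, 256-byte inodes):

```python
import math

FS_BYTES = 4 * 1024**3               # 4 GiB file system
BLOCK_SIZE = 4096                    # 4k blocks
BLOCKS_PER_GROUP = 32768
INODE_SIZE = 256
INODES_PER_BLOCK = BLOCK_SIZE // INODE_SIZE   # 16 inodes per 4k block

def inode_count(ratio):
    """Round the target inode count up to whole inode table blocks."""
    groups = math.ceil(FS_BYTES // BLOCK_SIZE / BLOCKS_PER_GROUP)  # 32
    target = math.ceil(FS_BYTES / ratio)        # minimum inodes overall
    per_group = math.ceil(target / groups)      # minimum inodes per group
    itb_blocks = math.ceil(per_group / INODES_PER_BLOCK)  # whole blocks
    return groups * itb_blocks * INODES_PER_BLOCK

for k in (512, 576, 640, 704):
    print(f"-i {k}k -> {inode_count(k * 1024)} inodes")
# -i 512k -> 8192 inodes
# -i 576k -> 7680 inodes
# -i 640k -> 6656 inodes
# -i 704k -> 6144 inodes
```

This reproduces the actual numbers observed in the report (8192, 7680,
6656, 6144) for all four ratios tested.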

So this isn't a bug in mke2fs, but rather an apparent misunderstanding
of how the inode ratio is used.  In general, the basic idea is: if the
system administrator thinks that the average size of the files is,
say, 704k, we need to make sure that there are at *least* 5958 inodes
(that is, 4 GiB / 704k; since the .82 in 5957.82 makes no sense, that
gets rounded up to 5958).  But we need to have an integral number of
inode table blocks, and there's no point in having a partially filled
inode table block, which is why it ends up being 6144 inodes.  I'm not
at all sure how you came up with your expected 6656 inodes,
considering that 6656 * 704k is 4.47 GiB, which is quite a bit more
than 4 GiB.

Cheers,

                                                - Ted
