Hi all,

I ran into an issue with mkfs.lustre on drives just shy of 16TiB, which
appears to relate to how the resize option is set by mkfs.lustre when it
calls mke2fs.

I've opened this:
https://jira.whamcloud.com/browse/LU-16305
Side note: how do I assign something to myself?  I have a fix but can't find
anything in JIRA that lets me assign a ticket I opened to myself.

My fix bounds disk capacity 1MiB below the specified resize value if your disk 
falls into the problem range of (16TiB-32GiB) to (16TiB-1B), but I wanted to 
better understand what we're trying to accomplish with the extended option 
"resize."

My understanding of resize in the mke2fs context is that it reserves extra
space in the block group descriptor table so that you could later grow
ext*/ldiskfs up to the given resize block count.  However, in
libmount_utils_ldiskfs.c, Lustre's use of it looks like an optimization I
don't quite understand:

    /* In order to align the filesystem metadata on 1MB boundaries,
     * give a resize value that will reserve a power-of-two group
     * descriptor blocks, but leave one block for the superblock.
     * Only useful for filesystems with < 2^32 blocks due to resize
     * limitations.

The comment makes it sound like resize varies with the device size, but it 
currently only varies with block size (for a 4KB block size it's always 
4290772992).
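
For what it's worth, with a 4KB block size that constant seems to fall out of
arithmetic like the following (a sketch of my reading of the calculation, not
the actual Lustre code; the 32-byte descriptor size is an assumption):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            /* Illustrative arithmetic only: assumes a 4KB block size and
             * 32-byte group descriptors. */
            uint64_t block_size = 4096;
            uint64_t blocks_per_group = block_size * 8;  /* blocks covered by one bitmap block */
            uint64_t descs_per_block = block_size / 32;  /* group descriptors per block */

            /* 2^32 minus the blocks covered by one block of group descriptors */
            uint64_t resize_blks = (1ULL << 32) - descs_per_block * blocks_per_group;

            printf("resize = %llu blocks\n", (unsigned long long)resize_blks);
            /* prints: resize = 4290772992 blocks */
            return 0;
    }

If that reading is right, the value really is a function of the block size
alone, which matches what I'm seeing.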

Does anybody know what this optimization is attempting to achieve, and what 
motivated it?  It doesn't seem related in the least to actually resizing the 
drive.  Since most spinners are north of 16TiB nowadays (at a 4KB block size, 
2^32 blocks is exactly 16TiB), this optimization won't be enabled for them -- 
is that concerning?

Best,

ellis