Re: [f2fs-dev] [PATCH v2] mkfs.f2fs: expand scalability of nat bitmap
On 2018/1/24 10:39, Jaegeuk Kim wrote:
> On 01/24, Chao Yu wrote:
>> On 2018/1/24 10:22, Jaegeuk Kim wrote:
>>> On 01/24, Chao Yu wrote:
>>>> On 2018/1/24 6:19, Jaegeuk Kim wrote:
>>>>> [snip earlier discussion of the patch description, the sizing
>>>>> tables, and the testSaneInodes() check]
>>>>>
>>>>> It simply triggers a panic, if the kernel does not have the patch
>>>>> to detect the feature. Hmm...
>>>>
>>>> Yes, because we have changed the disk layout of
>>>> nat/sit_version_bitmap in mkfs.f2fs; if the kernel cannot detect
>>>> that, we will simply hit a panic.
>>>
>>> , which means we can't do this by default at least.
>>
>> Oh, right, we need to keep backward compatibility with old kernels in
>> mkfs.f2fs by default. What about adding a new option to enable this
>> only for new kernels?
>
> I guess it'd be possible, and we must warn the user when setting this.

Agreed, let me update this patch for this.

Thanks,
Re: [f2fs-dev] [PATCH v2] mkfs.f2fs: expand scalability of nat bitmap
On 01/24, Chao Yu wrote:
> On 2018/1/24 10:22, Jaegeuk Kim wrote:
>> On 01/24, Chao Yu wrote:
>>> On 2018/1/24 6:19, Jaegeuk Kim wrote:
>>>> [snip earlier discussion of the patch description, the sizing
>>>> tables, and the testSaneInodes() check]
>>>>
>>>> It simply triggers a panic, if the kernel does not have the patch
>>>> to detect the feature. Hmm...
>>>
>>> Yes, because we have changed the disk layout of
>>> nat/sit_version_bitmap in mkfs.f2fs; if the kernel cannot detect
>>> that, we will simply hit a panic.
>>
>> , which means we can't do this by default at least.
>
> Oh, right, we need to keep backward compatibility with old kernels in
> mkfs.f2fs by default. What about adding a new option to enable this
> only for new kernels?

I guess it'd be possible, and we must warn the user when setting this.

Thanks,
Re: [f2fs-dev] [PATCH v2] mkfs.f2fs: expand scalability of nat bitmap
On 2018/1/24 10:22, Jaegeuk Kim wrote:
> On 01/24, Chao Yu wrote:
>> On 2018/1/24 6:19, Jaegeuk Kim wrote:
>>> [snip earlier discussion of the patch description, the sizing
>>> tables, and the testSaneInodes() check]
>>>
>>> It simply triggers a panic, if the kernel does not have the patch to
>>> detect the feature. Hmm...
>>
>> Yes, because we have changed the disk layout of nat/sit_version_bitmap
>> in mkfs.f2fs; if the kernel cannot detect that, we will simply hit a
>> panic.
>
> , which means we can't do this by default at least.

Oh, right, we need to keep backward compatibility with old kernels in
mkfs.f2fs by default. What about adding a new option to enable this only
for new kernels?

Thanks,
Re: [f2fs-dev] [PATCH v2] mkfs.f2fs: expand scalability of nat bitmap
On 01/24, Chao Yu wrote:
> On 2018/1/24 6:19, Jaegeuk Kim wrote:
>> [snip earlier discussion of the patch description, the sizing tables,
>> and the testSaneInodes() check]
>>
>> It simply triggers a panic, if the kernel does not have the patch to
>> detect the feature. Hmm...
>
> Yes, because we have changed the disk layout of nat/sit_version_bitmap
> in mkfs.f2fs; if the kernel cannot detect that, we will simply hit a
> panic.

, which means we can't do this by default at least.

Thanks,
Re: [f2fs-dev] [PATCH v2] mkfs.f2fs: expand scalability of nat bitmap
On 2018/1/24 6:19, Jaegeuk Kim wrote:
> On 01/23, Jaegeuk Kim wrote:
>> [snip earlier discussion of the patch description, the sizing tables,
>> and the testSaneInodes() check]
>>
>> Yes, thanks for checking the codes. Let me play with this for some
>> time.
>
> It simply triggers a panic, if the kernel does not have the patch to
> detect the feature. Hmm...

Yes, because we have changed the disk layout of nat/sit_version_bitmap
in mkfs.f2fs; if the kernel cannot detect that, we will simply hit a
panic.

Thanks,
Re: [f2fs-dev] [PATCH v2] mkfs.f2fs: expand scalability of nat bitmap
On 01/23, Jaegeuk Kim wrote:
> On 01/23, Chao Yu wrote:
>> On 2018/1/23 7:00, Jaegeuk Kim wrote:
>>> [snip earlier discussion of the patch description and sizing tables]
>>>
>>> My concern is about a CTS failure in terms of # of free inodes.
>>
>> You mean testSaneInodes()?
>>
>>     final long maxsize = stat.f_blocks * stat.f_frsize;
>>     final long maxInodes = maxsize / 4096;
>>     final long minsize = stat.f_bavail * stat.f_frsize;
>>     final long minInodes = minsize / 32768;
>>
>> The range is about [1/8, 1], so our 20% threshold just lets it pass,
>> right?
>
> Yes, thanks for checking the codes. Let me play with this for some time.

It simply triggers a panic, if the kernel does not have the patch to
detect the feature. Hmm...

Thanks,
Re: [f2fs-dev] [PATCH v2] mkfs.f2fs: expand scalability of nat bitmap
On 01/23, Chao Yu wrote:
> On 2018/1/23 7:00, Jaegeuk Kim wrote:
>> On 01/17, Chao Yu wrote:
>>> [snip earlier discussion of the patch description and sizing tables]
>>
>> My concern is about a CTS failure in terms of # of free inodes.
>
> You mean testSaneInodes()?
>
>     final long maxsize = stat.f_blocks * stat.f_frsize;
>     final long maxInodes = maxsize / 4096;
>     final long minsize = stat.f_bavail * stat.f_frsize;
>     final long minInodes = minsize / 32768;
>
> The range is about [1/8, 1], so our 20% threshold just lets it pass,
> right?

Yes, thanks for checking the codes. Let me play with this for some time.
Re: [f2fs-dev] [PATCH v2] mkfs.f2fs: expand scalability of nat bitmap
On 2018/1/23 7:00, Jaegeuk Kim wrote:
> On 01/17, Chao Yu wrote:
>> Hi Jaegeuk,
>>
>> On 2018/1/17 8:47, Jaegeuk Kim wrote:
>>> Hi Chao,
>>>
>>> On 01/15, Chao Yu wrote:
>>>> Previously, our total node count (nat_bitmap) and total NAT segment
>>>> count do not increase monotonically with image size, and the maximum
>>>> nat_bitmap size is limited by "CHECKSUM_OFFSET - sizeof(struct
>>>> f2fs_checkpoint) + 1", which scales badly when a user wants to
>>>> create more inodes/nodes in a larger image.
>>>>
>>>> So this patch relieves the limitation by, by default, limiting the
>>>> total NAT entry count to 20% of the total block count.
>>>>
>>>> Before:
>>>> image_size(GB)  nat_bitmap  sit_bitmap  nat_segment  sit_segment
>>>> 16              3836        64          36           2
>>>> 32              3836        64          72           2
>>>> 64              3772        128         116          4
>>>> 128             3708        192         114          6
>>>> 256             3580        320         110          10
>>
>> As you can see, the nat_segment count drops as image size grows,
>> starting from 64GB; that is, the NAT segment count does not increase
>> monotonically with image size. So would it be better to activate this
>> only when the image size is larger than 32GB?
>>
>> IMO, configuring a fixed basic nid ratio, as ext4 does ("free inode" :
>> "free block" is about 1 : 4), would be better:
>> a. It is easy for a user to predict the nid count or NAT segment
>>    count for a fixed-size image;
>> b. If a user wants to reserve more nids, we can support a -N option
>>    in mkfs.f2fs to specify the total nid count.
>
> My concern is about a CTS failure in terms of # of free inodes.

You mean testSaneInodes()?

    final long maxsize = stat.f_blocks * stat.f_frsize;
    final long maxInodes = maxsize / 4096;
    final long minsize = stat.f_bavail * stat.f_frsize;
    final long minInodes = minsize / 32768;

The range is about [1/8, 1], so our 20% threshold just lets it pass,
right?

Thanks,

> Thanks,
>
>> What do you think?
>>
>> Thanks,
>>
>>>> 512             3260        640         100          20
>>>> 1024            2684        1216        82           38
>>>> 2048            1468        2432        44           76
>>>> 4096            3900        4800        120          150
>>>>
>>>> After:
>>>> image_size(GB)  nat_bitmap  sit_bitmap  nat_segment  sit_segment
>>>> 16              256         64          8            2
>>>> 32              512         64          16           2
>>>> 64              960         128         30           4
>>>> 128             1856        192         58           6
>>>> 256             3712        320         116          10
>>>
>>> Can we activate this, if size is larger than 256GB or something
>>> around that?
>>>
>>> Thanks,
>>>
>>>> 512             7424        640         232          20
>>>> 1024            14787       1216        462          38
>>>> 2048            29504       2432        922          76
>>>> 4096            59008       4800        1844         150
>>>>
>>>> Signed-off-by: Chao Yu
>>>> ---
>>>> v2:
>>>> - add CP_LARGE_NAT_BITMAP_FLAG flag to indicate new layout of
>>>>   nat/sit bitmap.
>>>>
>>>>  fsck/f2fs.h        | 19 +--
>>>>  fsck/resize.c      | 35 +--
>>>>  include/f2fs_fs.h  |  8 ++--
>>>>  lib/libf2fs.c      |  1 +
>>>>  mkfs/f2fs_format.c | 45 +++--
>>>>  5 files changed, 60 insertions(+), 48 deletions(-)
>>>>
>>>> diff --git a/fsck/f2fs.h b/fsck/f2fs.h
>>>> index f5970d9dafc0..8a5ce365282d 100644
>>>> --- a/fsck/f2fs.h
>>>> +++ b/fsck/f2fs.h
>>>> @@ -239,6 +239,12 @@ static inline unsigned int ofs_of_node(struct f2fs_node *node_blk)
>>>>  	return flag >> OFFSET_BIT_SHIFT;
>>>>  }
>>>>
>>>> +static inline bool is_set_ckpt_flags(struct f2fs_checkpoint *cp, unsigned int f)
>>>> +{
>>>> +	unsigned int ckpt_flags = le32_to_cpu(cp->ckpt_flags);
>>>> +	return ckpt_flags & f ? 1 : 0;
>>>> +}
>>>> +
>>>>  static inline unsigned long __bitmap_size(struct f2fs_sb_info *sbi, int flag)
>>>>  {
>>>>  	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
>>>> @@ -256,6 +262,13 @@ static inline void *__bitmap_ptr(struct f2fs_sb_info *sbi, int flag)
>>>>  {
>>>>  	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
>>>>  	int offset;
>>>> +
>>>> +	if (is_set_ckpt_flags(ckpt, CP_LARGE_NAT_BITMAP_FLAG)) {
>>>> +		offset = (flag == SIT_BITMAP) ?
>>>> +			le32_to_cpu(ckpt->nat_ver_bitmap_bytesize) : 0;
>>>> +		return &ckpt->sit_nat_version_bitmap + offset;
>>>> +	}
>>>> +
Re: [f2fs-dev] [PATCH v2] mkfs.f2fs: expand scalability of nat bitmap
On 01/17, Chao Yu wrote:
> Hi Jaegeuk,
>
> On 2018/1/17 8:47, Jaegeuk Kim wrote:
>> Hi Chao,
>>
>> On 01/15, Chao Yu wrote:
>>> Previously, our total node count (nat_bitmap) and total NAT segment
>>> count do not increase monotonically with image size, and the maximum
>>> nat_bitmap size is limited by "CHECKSUM_OFFSET - sizeof(struct
>>> f2fs_checkpoint) + 1", which scales badly when a user wants to
>>> create more inodes/nodes in a larger image.
>>>
>>> So this patch relieves the limitation by, by default, limiting the
>>> total NAT entry count to 20% of the total block count.
>>>
>>> Before:
>>> image_size(GB)  nat_bitmap  sit_bitmap  nat_segment  sit_segment
>>> 16              3836        64          36           2
>>> 32              3836        64          72           2
>>> 64              3772        128         116          4
>>> 128             3708        192         114          6
>>> 256             3580        320         110          10
>
> As you can see, the nat_segment count drops as image size grows,
> starting from 64GB; that is, the NAT segment count does not increase
> monotonically with image size. So would it be better to activate this
> only when the image size is larger than 32GB?
>
> IMO, configuring a fixed basic nid ratio, as ext4 does ("free inode" :
> "free block" is about 1 : 4), would be better:
> a. It is easy for a user to predict the nid count or NAT segment
>    count for a fixed-size image;
> b. If a user wants to reserve more nids, we can support a -N option
>    in mkfs.f2fs to specify the total nid count.

My concern is about a CTS failure in terms of # of free inodes.

Thanks,

> What do you think?
>
> Thanks,
>
>>> 512             3260        640         100          20
>>> 1024            2684        1216        82           38
>>> 2048            1468        2432        44           76
>>> 4096            3900        4800        120          150
>>>
>>> After:
>>> image_size(GB)  nat_bitmap  sit_bitmap  nat_segment  sit_segment
>>> 16              256         64          8            2
>>> 32              512         64          16           2
>>> 64              960         128         30           4
>>> 128             1856        192         58           6
>>> 256             3712        320         116          10
>>
>> Can we activate this, if size is larger than 256GB or something
>> around that?
>>
>> Thanks,
>>
>>> 512             7424        640         232          20
>>> 1024            14787       1216        462          38
>>> 2048            29504       2432        922          76
>>> 4096            59008       4800        1844         150
>>>
>>> Signed-off-by: Chao Yu
>>> ---
>>> v2:
>>> - add CP_LARGE_NAT_BITMAP_FLAG flag to indicate new layout of
>>>   nat/sit bitmap.
>>>
>>>  fsck/f2fs.h        | 19 +--
>>>  fsck/resize.c      | 35 +--
>>>  include/f2fs_fs.h  |  8 ++--
>>>  lib/libf2fs.c      |  1 +
>>>  mkfs/f2fs_format.c | 45 +++--
>>>  5 files changed, 60 insertions(+), 48 deletions(-)
>>>
>>> diff --git a/fsck/f2fs.h b/fsck/f2fs.h
>>> index f5970d9dafc0..8a5ce365282d 100644
>>> --- a/fsck/f2fs.h
>>> +++ b/fsck/f2fs.h
>>> @@ -239,6 +239,12 @@ static inline unsigned int ofs_of_node(struct f2fs_node *node_blk)
>>>  	return flag >> OFFSET_BIT_SHIFT;
>>>  }
>>>
>>> +static inline bool is_set_ckpt_flags(struct f2fs_checkpoint *cp, unsigned int f)
>>> +{
>>> +	unsigned int ckpt_flags = le32_to_cpu(cp->ckpt_flags);
>>> +	return ckpt_flags & f ? 1 : 0;
>>> +}
>>> +
>>>  static inline unsigned long __bitmap_size(struct f2fs_sb_info *sbi, int flag)
>>>  {
>>>  	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
>>> @@ -256,6 +262,13 @@ static inline void *__bitmap_ptr(struct f2fs_sb_info *sbi, int flag)
>>>  {
>>>  	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
>>>  	int offset;
>>> +
>>> +	if (is_set_ckpt_flags(ckpt, CP_LARGE_NAT_BITMAP_FLAG)) {
>>> +		offset = (flag == SIT_BITMAP) ?
>>> +			le32_to_cpu(ckpt->nat_ver_bitmap_bytesize) : 0;
>>> +		return &ckpt->sit_nat_version_bitmap + offset;
>>> +	}
>>> +
>>>  	if (le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_payload) > 0) {
>>>  		if (flag == NAT_BITMAP)
>>>  			return &ckpt->sit_nat_version_bitmap;
>>> @@ -268,12 +281,6 @@ static inline void *__bitmap_ptr(struct f2fs_sb_info *sbi, int flag)
>>>  	}
>>>  }
>>>
>>> -static inline bool is_set_ckpt_flags(struct f2fs_checkpoint *cp, unsigned int f)
>>> -{
>>> -	unsigned int ckpt_flags = le32_to_cpu(cp->ckpt_flags);
Re: [f2fs-dev] [PATCH v2] mkfs.f2fs: expand scalability of nat bitmap
Hi Xiang,

On 2018/1/17 11:35, Gao Xiang wrote:
> Hi Chao Yu,
>
> On 2018/1/17 11:15, Chao Yu wrote:
>> Hi Jaegeuk,
>>
>> On 2018/1/17 8:47, Jaegeuk Kim wrote:
>>> Hi Chao,
>>>
>>> On 01/15, Chao Yu wrote:
>>>> Previously, our total node count (nat_bitmap) and total nat segment count
>>>> do not increase monotonically with image size, and the maximum nat_bitmap
>>>> size is limited by "CHECKSUM_OFFSET - sizeof(struct f2fs_checkpoint) + 1",
>>>> which scales badly when the user wants to create more inodes/nodes in a
>>>> larger image.
>>>>
>>>> So this patch relieves that limitation: by default, it limits the total
>>>> nat entry count to 20% of the total block count.
>>>>
>>>> Before:
>>>> image_size(GB)  nat_bitmap  sit_bitmap  nat_segment  sit_segment
>>>> 16              3836        64          36           2
>>>> 32              3836        64          72           2
>>>> 64              3772        128         116          4
>>>> 128             3708        192         114          6
>>>> 256             3580        320         110          10
>>
>> As you see, the nat_segment count shrinks as image size grows beyond 64GB;
>> in other words, the nat segment count does not increase monotonically with
>> image size. So would it be better to activate this only when the image is
>> larger than 32GB?
>>
>> IMO, fixing the basic nid ratio to a constant value, as ext4 does
>> ("free inode" : "free block" is about 1 : 4), would be better:
>> a. It makes it easy for the user to predict the nid count or nat segment
>>    count for a fixed-size image;
>> b. If the user wants to reserve more nids, we can support a -N option in
>>    mkfs.f2fs to specify the total nid count.
> I agree, because it is weird if the nat segment count does not increase
> monotonically, especially for server users. How about modifying it like
> this?
>
> 32GB~xxxGB (if max_sit_bitmap_size + max_nat_bitmap_size <=
> MAX_BITMAP_SIZE_IN_CKPT) --- use the original nat calculation;
> >xxxGB --- use CP_LARGE_NAT_BITMAP_FLAG and the introduced ratio.
>
> If a user-defined -N is specified, use that nid count (or ratio?) instead
> of the above calculation.

IMO, that is a little bit complicated from the view of both the developer
and the user. How about waiting for Jaegeuk's opinion?
Thanks,

> Thanks,
>
>> What do you think?
>>
>> Thanks,
>>
>>>> 512             3260        640         100          20
>>>> 1024            2684        1216        82           38
>>>> 2048            1468        2432        44           76
>>>> 4096            3900        4800        120          150
>>>>
>>>> After:
>>>> image_size(GB)  nat_bitmap  sit_bitmap  nat_segment  sit_segment
>>>> 16              256         64          8            2
>>>> 32              512         64          16           2
>>>> 64              960         128         30           4
>>>> 128             1856        192         58           6
>>>> 256             3712        320         116          10
>>> Can we activate this, if size is larger than 256GB or something around that?
>>>
>>> Thanks,
>>>
>>>> 512             7424        640         232          20
>>>> 1024            14787       1216        462          38
>>>> 2048            29504       2432        922          76
>>>> 4096            59008       4800        1844         150
>>>>
>>>> Signed-off-by: Chao Yu
>>>> ---
>>>> v2:
>>>> - add CP_LARGE_NAT_BITMAP_FLAG flag to indicate new layout of nat/sit
>>>>   bitmap.
>>>>
>>>>  fsck/f2fs.h        | 19 +--
>>>>  fsck/resize.c      | 35 +--
>>>>  include/f2fs_fs.h  |  8 ++--
>>>>  lib/libf2fs.c      |  1 +
>>>>  mkfs/f2fs_format.c | 45 +++--
>>>>  5 files changed, 60 insertions(+), 48 deletions(-)
>>>>
>>>> diff --git a/fsck/f2fs.h b/fsck/f2fs.h
>>>> index f5970d9dafc0..8a5ce365282d 100644
>>>> --- a/fsck/f2fs.h
>>>> +++ b/fsck/f2fs.h
>>>> @@ -239,6 +239,12 @@ static inline unsigned int ofs_of_node(struct f2fs_node *node_blk)
>>>>  	return flag >> OFFSET_BIT_SHIFT;
>>>>  }
>>>>
>>>> +static inline bool is_set_ckpt_flags(struct f2fs_checkpoint *cp, unsigned int f)
>>>> +{
>>>> +	unsigned int ckpt_flags = le32_to_cpu(cp->ckpt_flags);
>>>> +	return ckpt_flags & f ? 1 : 0;
>>>> +}
>>>> +
>>>>  static inline unsigned long __bitmap_size(struct f2fs_sb_info *sbi, int flag)
>>>>  {
>>>>  	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
>>>> @@ -256,6 +262,13 @@ static inline void *__bitmap_ptr(struct f2fs_sb_info *sbi, int flag)
>>>>  {
>>>>  	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
>>>>  	int offset;
>>>> +
>>>> +	if (is_set_ckpt_flags(ckpt, CP_LARGE_NAT_BITMAP_FLAG)) {
>>>> +
Re: [f2fs-dev] [PATCH v2] mkfs.f2fs: expand scalability of nat bitmap
Hi Chao Yu,

On 2018/1/17 11:15, Chao Yu wrote:
> Hi Jaegeuk,
>
> On 2018/1/17 8:47, Jaegeuk Kim wrote:
>> Hi Chao,
>>
>> On 01/15, Chao Yu wrote:
>>> Previously, our total node count (nat_bitmap) and total nat segment count
>>> do not increase monotonically with image size, and the maximum nat_bitmap
>>> size is limited by "CHECKSUM_OFFSET - sizeof(struct f2fs_checkpoint) + 1",
>>> which scales badly when the user wants to create more inodes/nodes in a
>>> larger image.
>>>
>>> So this patch relieves that limitation: by default, it limits the total
>>> nat entry count to 20% of the total block count.
>>>
>>> Before:
>>> image_size(GB)  nat_bitmap  sit_bitmap  nat_segment  sit_segment
>>> 16              3836        64          36           2
>>> 32              3836        64          72           2
>>> 64              3772        128         116          4
>>> 128             3708        192         114          6
>>> 256             3580        320         110          10
>
> As you see, the nat_segment count shrinks as image size grows beyond 64GB;
> in other words, the nat segment count does not increase monotonically with
> image size. So would it be better to activate this only when the image is
> larger than 32GB?
>
> IMO, fixing the basic nid ratio to a constant value, as ext4 does
> ("free inode" : "free block" is about 1 : 4), would be better:
> a. It makes it easy for the user to predict the nid count or nat segment
>    count for a fixed-size image;
> b. If the user wants to reserve more nids, we can support a -N option in
>    mkfs.f2fs to specify the total nid count.

I agree, because it is weird if the nat segment count does not increase
monotonically, especially for server users. How about modifying it like
this?

32GB~xxxGB (if max_sit_bitmap_size + max_nat_bitmap_size <=
MAX_BITMAP_SIZE_IN_CKPT) --- use the original nat calculation;
>xxxGB --- use CP_LARGE_NAT_BITMAP_FLAG and the introduced ratio.

If a user-defined -N is specified, use that nid count (or ratio?) instead
of the above calculation.

Thanks,

> What do you think?
> Thanks,
>
>>> 512             3260        640         100          20
>>> 1024            2684        1216        82           38
>>> 2048            1468        2432        44           76
>>> 4096            3900        4800        120          150
>>>
>>> After:
>>> image_size(GB)  nat_bitmap  sit_bitmap  nat_segment  sit_segment
>>> 16              256         64          8            2
>>> 32              512         64          16           2
>>> 64              960         128         30           4
>>> 128             1856        192         58           6
>>> 256             3712        320         116          10
>>
>> Can we activate this, if size is larger than 256GB or something around that?
>>
>> Thanks,
>>
>>> 512             7424        640         232          20
>>> 1024            14787       1216        462          38
>>> 2048            29504       2432        922          76
>>> 4096            59008       4800        1844         150
>>>
>>> Signed-off-by: Chao Yu
>>> ---
>>> v2:
>>> - add CP_LARGE_NAT_BITMAP_FLAG flag to indicate new layout of nat/sit
>>>   bitmap.
>>>
>>>  fsck/f2fs.h        | 19 +--
>>>  fsck/resize.c      | 35 +--
>>>  include/f2fs_fs.h  |  8 ++--
>>>  lib/libf2fs.c      |  1 +
>>>  mkfs/f2fs_format.c | 45 +++--
>>>  5 files changed, 60 insertions(+), 48 deletions(-)
>>>
>>> diff --git a/fsck/f2fs.h b/fsck/f2fs.h
>>> index f5970d9dafc0..8a5ce365282d 100644
>>> --- a/fsck/f2fs.h
>>> +++ b/fsck/f2fs.h
>>> @@ -239,6 +239,12 @@ static inline unsigned int ofs_of_node(struct f2fs_node *node_blk)
>>>  	return flag >> OFFSET_BIT_SHIFT;
>>>  }
>>>
>>> +static inline bool is_set_ckpt_flags(struct f2fs_checkpoint *cp, unsigned int f)
>>> +{
>>> +	unsigned int ckpt_flags = le32_to_cpu(cp->ckpt_flags);
>>> +	return ckpt_flags & f ? 1 : 0;
>>> +}
>>> +
>>>  static inline unsigned long __bitmap_size(struct f2fs_sb_info *sbi, int flag)
>>>  {
>>>  	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
>>> @@ -256,6 +262,13 @@ static inline void *__bitmap_ptr(struct f2fs_sb_info *sbi, int flag)
>>>  {
>>>  	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
>>>  	int offset;
>>> +
>>> +	if (is_set_ckpt_flags(ckpt, CP_LARGE_NAT_BITMAP_FLAG)) {
>>> +		offset = (flag == SIT_BITMAP) ?
>>> +			le32_to_cpu(ckpt->nat_ver_bitmap_bytesize) : 0;
>>> +		return &ckpt->sit_nat_version_bitmap + offset;
>>> +	}
>>> +
>>>  	if (le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_payload) > 0) {
>>>  		if (flag == NAT_BITMAP)
>>>  			return &ckpt->sit_nat_version_bitmap;
>>> @@ -268,12 +281,6 @@ static inline void *__bitmap_ptr(struct f2fs_sb_info *sbi, int flag)
>>>  	}
>>>  }
>>>
>>> -static inline bool is_set_ckpt_flags(struct f2fs_checkpoint *cp, unsigned int f)
>>> -{
>>> -	unsigned int ckpt_flags = le32_to_cpu(cp->ckpt_flags);
>>> -	return ckpt_flags
Re: [f2fs-dev] [PATCH v2] mkfs.f2fs: expand scalability of nat bitmap
Hi Jaegeuk,

On 2018/1/17 8:47, Jaegeuk Kim wrote:
> Hi Chao,
>
> On 01/15, Chao Yu wrote:
>> Previously, our total node count (nat_bitmap) and total nat segment count
>> do not increase monotonically with image size, and the maximum nat_bitmap
>> size is limited by "CHECKSUM_OFFSET - sizeof(struct f2fs_checkpoint) + 1",
>> which scales badly when the user wants to create more inodes/nodes in a
>> larger image.
>>
>> So this patch relieves that limitation: by default, it limits the total
>> nat entry count to 20% of the total block count.
>>
>> Before:
>> image_size(GB)  nat_bitmap  sit_bitmap  nat_segment  sit_segment
>> 16              3836        64          36           2
>> 32              3836        64          72           2
>> 64              3772        128         116          4
>> 128             3708        192         114          6
>> 256             3580        320         110          10

As you see, the nat_segment count shrinks as image size grows beyond 64GB;
in other words, the nat segment count does not increase monotonically with
image size. So would it be better to activate this only when the image is
larger than 32GB?

IMO, fixing the basic nid ratio to a constant value, as ext4 does
("free inode" : "free block" is about 1 : 4), would be better:
a. It makes it easy for the user to predict the nid count or nat segment
   count for a fixed-size image;
b. If the user wants to reserve more nids, we can support a -N option in
   mkfs.f2fs to specify the total nid count.

What do you think?

Thanks,

>> 512             3260        640         100          20
>> 1024            2684        1216        82           38
>> 2048            1468        2432        44           76
>> 4096            3900        4800        120          150
>>
>> After:
>> image_size(GB)  nat_bitmap  sit_bitmap  nat_segment  sit_segment
>> 16              256         64          8            2
>> 32              512         64          16           2
>> 64              960         128         30           4
>> 128             1856        192         58           6
>> 256             3712        320         116          10
>
> Can we activate this, if size is larger than 256GB or something around that?
>
> Thanks,
>
>> 512             7424        640         232          20
>> 1024            14787       1216        462          38
>> 2048            29504       2432        922          76
>> 4096            59008       4800        1844         150
>>
>> Signed-off-by: Chao Yu
>> ---
>> v2:
>> - add CP_LARGE_NAT_BITMAP_FLAG flag to indicate new layout of nat/sit
>>   bitmap.
>>  fsck/f2fs.h        | 19 +--
>>  fsck/resize.c      | 35 +--
>>  include/f2fs_fs.h  |  8 ++--
>>  lib/libf2fs.c      |  1 +
>>  mkfs/f2fs_format.c | 45 +++--
>>  5 files changed, 60 insertions(+), 48 deletions(-)
>>
>> diff --git a/fsck/f2fs.h b/fsck/f2fs.h
>> index f5970d9dafc0..8a5ce365282d 100644
>> --- a/fsck/f2fs.h
>> +++ b/fsck/f2fs.h
>> @@ -239,6 +239,12 @@ static inline unsigned int ofs_of_node(struct f2fs_node *node_blk)
>>  	return flag >> OFFSET_BIT_SHIFT;
>>  }
>>
>> +static inline bool is_set_ckpt_flags(struct f2fs_checkpoint *cp, unsigned int f)
>> +{
>> +	unsigned int ckpt_flags = le32_to_cpu(cp->ckpt_flags);
>> +	return ckpt_flags & f ? 1 : 0;
>> +}
>> +
>>  static inline unsigned long __bitmap_size(struct f2fs_sb_info *sbi, int flag)
>>  {
>>  	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
>> @@ -256,6 +262,13 @@ static inline void *__bitmap_ptr(struct f2fs_sb_info *sbi, int flag)
>>  {
>>  	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
>>  	int offset;
>> +
>> +	if (is_set_ckpt_flags(ckpt, CP_LARGE_NAT_BITMAP_FLAG)) {
>> +		offset = (flag == SIT_BITMAP) ?
>> +			le32_to_cpu(ckpt->nat_ver_bitmap_bytesize) : 0;
>> +		return &ckpt->sit_nat_version_bitmap + offset;
>> +	}
>> +
>>  	if (le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_payload) > 0) {
>>  		if (flag == NAT_BITMAP)
>>  			return &ckpt->sit_nat_version_bitmap;
>> @@ -268,12 +281,6 @@ static inline void *__bitmap_ptr(struct f2fs_sb_info *sbi, int flag)
>>  	}
>>  }
>>
>> -static inline bool is_set_ckpt_flags(struct f2fs_checkpoint *cp, unsigned int f)
>> -{
>> -	unsigned int ckpt_flags = le32_to_cpu(cp->ckpt_flags);
>> -	return ckpt_flags & f ? 1 : 0;
>> -}
>> -
>>  static inline block_t __start_cp_addr(struct f2fs_sb_info *sbi)
>>  {
>>  	block_t start_addr = le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_blkaddr);
>> diff --git a/fsck/resize.c b/fsck/resize.c
>> index 143ad5d3c0a1..f3547c86f351 100644
>> --- a/fsck/resize.c
>> +++ b/fsck/resize.c
>> @@ -13,10 +13,10 @@ static
Re: [f2fs-dev] [PATCH v2] mkfs.f2fs: expand scalability of nat bitmap
Hi Chao,

On 01/15, Chao Yu wrote:
> Previously, our total node count (nat_bitmap) and total nat segment count
> do not increase monotonically with image size, and the maximum nat_bitmap
> size is limited by "CHECKSUM_OFFSET - sizeof(struct f2fs_checkpoint) + 1",
> which scales badly when the user wants to create more inodes/nodes in a
> larger image.
>
> So this patch relieves that limitation: by default, it limits the total
> nat entry count to 20% of the total block count.
>
> Before:
> image_size(GB)  nat_bitmap  sit_bitmap  nat_segment  sit_segment
> 16              3836        64          36           2
> 32              3836        64          72           2
> 64              3772        128         116          4
> 128             3708        192         114          6
> 256             3580        320         110          10
> 512             3260        640         100          20
> 1024            2684        1216        82           38
> 2048            1468        2432        44           76
> 4096            3900        4800        120          150
>
> After:
> image_size(GB)  nat_bitmap  sit_bitmap  nat_segment  sit_segment
> 16              256         64          8            2
> 32              512         64          16           2
> 64              960         128         30           4
> 128             1856        192         58           6
> 256             3712        320         116          10

Can we activate this, if size is larger than 256GB or something around that?

Thanks,

> 512             7424        640         232          20
> 1024            14787       1216        462          38
> 2048            29504       2432        922          76
> 4096            59008       4800        1844         150
>
> Signed-off-by: Chao Yu
> ---
> v2:
> - add CP_LARGE_NAT_BITMAP_FLAG flag to indicate new layout of nat/sit
>   bitmap.
>
>  fsck/f2fs.h        | 19 +--
>  fsck/resize.c      | 35 +--
>  include/f2fs_fs.h  |  8 ++--
>  lib/libf2fs.c      |  1 +
>  mkfs/f2fs_format.c | 45 +++--
>  5 files changed, 60 insertions(+), 48 deletions(-)
>
> diff --git a/fsck/f2fs.h b/fsck/f2fs.h
> index f5970d9dafc0..8a5ce365282d 100644
> --- a/fsck/f2fs.h
> +++ b/fsck/f2fs.h
> @@ -239,6 +239,12 @@ static inline unsigned int ofs_of_node(struct f2fs_node *node_blk)
>  	return flag >> OFFSET_BIT_SHIFT;
>  }
>
> +static inline bool is_set_ckpt_flags(struct f2fs_checkpoint *cp, unsigned int f)
> +{
> +	unsigned int ckpt_flags = le32_to_cpu(cp->ckpt_flags);
> +	return ckpt_flags & f ?
1 : 0;
> +}
> +
>  static inline unsigned long __bitmap_size(struct f2fs_sb_info *sbi, int flag)
>  {
>  	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
> @@ -256,6 +262,13 @@ static inline void *__bitmap_ptr(struct f2fs_sb_info *sbi, int flag)
>  {
>  	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
>  	int offset;
> +
> +	if (is_set_ckpt_flags(ckpt, CP_LARGE_NAT_BITMAP_FLAG)) {
> +		offset = (flag == SIT_BITMAP) ?
> +			le32_to_cpu(ckpt->nat_ver_bitmap_bytesize) : 0;
> +		return &ckpt->sit_nat_version_bitmap + offset;
> +	}
> +
>  	if (le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_payload) > 0) {
>  		if (flag == NAT_BITMAP)
>  			return &ckpt->sit_nat_version_bitmap;
> @@ -268,12 +281,6 @@ static inline void *__bitmap_ptr(struct f2fs_sb_info *sbi, int flag)
>  	}
>  }
>
> -static inline bool is_set_ckpt_flags(struct f2fs_checkpoint *cp, unsigned int f)
> -{
> -	unsigned int ckpt_flags = le32_to_cpu(cp->ckpt_flags);
> -	return ckpt_flags & f ? 1 : 0;
> -}
> -
>  static inline block_t __start_cp_addr(struct f2fs_sb_info *sbi)
>  {
>  	block_t start_addr = le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_blkaddr);
> diff --git a/fsck/resize.c b/fsck/resize.c
> index 143ad5d3c0a1..f3547c86f351 100644
> --- a/fsck/resize.c
> +++ b/fsck/resize.c
> @@ -13,10 +13,10 @@ static int get_new_sb(struct f2fs_super_block *sb)
>  {
>  	u_int32_t zone_size_bytes, zone_align_start_offset;
>  	u_int32_t blocks_for_sit, blocks_for_nat, blocks_for_ssa;
> -	u_int32_t sit_segments, diff, total_meta_segments;
> +	u_int32_t sit_segments, nat_segments, diff, total_meta_segments;
>  	u_int32_t total_valid_blks_available;
>  	u_int32_t sit_bitmap_size, max_sit_bitmap_size;
> -	u_int32_t max_nat_bitmap_size, max_nat_segments;
> +	u_int32_t max_nat_bitmap_size;
>  	u_int32_t segment_size_bytes = 1 << (get_sb(log_blocksize) +
>  				get_sb(log_blocks_per_seg));
>  	u_int32_t blks_per_seg = 1 << get_sb(log_blocks_per_seg);
> @@ -47,7 +47,15 @@ static int
[f2fs-dev] [PATCH v2] mkfs.f2fs: expand scalability of nat bitmap
Previously, our total node count (nat_bitmap) and total nat segment count
do not increase monotonically with image size, and the maximum nat_bitmap
size is limited by "CHECKSUM_OFFSET - sizeof(struct f2fs_checkpoint) + 1",
which scales badly when the user wants to create more inodes/nodes in a
larger image.

So this patch relieves that limitation: by default, it limits the total
nat entry count to 20% of the total block count.

Before:
image_size(GB)  nat_bitmap  sit_bitmap  nat_segment  sit_segment
16              3836        64          36           2
32              3836        64          72           2
64              3772        128         116          4
128             3708        192         114          6
256             3580        320         110          10
512             3260        640         100          20
1024            2684        1216        82           38
2048            1468        2432        44           76
4096            3900        4800        120          150

After:
image_size(GB)  nat_bitmap  sit_bitmap  nat_segment  sit_segment
16              256         64          8            2
32              512         64          16           2
64              960         128         30           4
128             1856        192         58           6
256             3712        320         116          10
512             7424        640         232          20
1024            14787       1216        462          38
2048            29504       2432        922          76
4096            59008       4800        1844         150

Signed-off-by: Chao Yu
---
v2:
- add CP_LARGE_NAT_BITMAP_FLAG flag to indicate new layout of nat/sit bitmap.

 fsck/f2fs.h        | 19 +--
 fsck/resize.c      | 35 +--
 include/f2fs_fs.h  |  8 ++--
 lib/libf2fs.c      |  1 +
 mkfs/f2fs_format.c | 45 +++--
 5 files changed, 60 insertions(+), 48 deletions(-)

diff --git a/fsck/f2fs.h b/fsck/f2fs.h
index f5970d9dafc0..8a5ce365282d 100644
--- a/fsck/f2fs.h
+++ b/fsck/f2fs.h
@@ -239,6 +239,12 @@ static inline unsigned int ofs_of_node(struct f2fs_node *node_blk)
 	return flag >> OFFSET_BIT_SHIFT;
 }

+static inline bool is_set_ckpt_flags(struct f2fs_checkpoint *cp, unsigned int f)
+{
+	unsigned int ckpt_flags = le32_to_cpu(cp->ckpt_flags);
+	return ckpt_flags & f ? 1 : 0;
+}
+
 static inline unsigned long __bitmap_size(struct f2fs_sb_info *sbi, int flag)
 {
 	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
@@ -256,6 +262,13 @@ static inline void *__bitmap_ptr(struct f2fs_sb_info *sbi, int flag)
 {
 	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
 	int offset;
+
+	if (is_set_ckpt_flags(ckpt, CP_LARGE_NAT_BITMAP_FLAG)) {
+		offset = (flag == SIT_BITMAP) ?
+			le32_to_cpu(ckpt->nat_ver_bitmap_bytesize) : 0;
+		return &ckpt->sit_nat_version_bitmap + offset;
+	}
+
 	if (le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_payload) > 0) {
 		if (flag == NAT_BITMAP)
 			return &ckpt->sit_nat_version_bitmap;
@@ -268,12 +281,6 @@ static inline void *__bitmap_ptr(struct f2fs_sb_info *sbi, int flag)
 	}
 }

-static inline bool is_set_ckpt_flags(struct f2fs_checkpoint *cp, unsigned int f)
-{
-	unsigned int ckpt_flags = le32_to_cpu(cp->ckpt_flags);
-	return ckpt_flags & f ? 1 : 0;
-}
-
 static inline block_t __start_cp_addr(struct f2fs_sb_info *sbi)
 {
 	block_t start_addr = le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_blkaddr);
diff --git a/fsck/resize.c b/fsck/resize.c
index 143ad5d3c0a1..f3547c86f351 100644
--- a/fsck/resize.c
+++ b/fsck/resize.c
@@ -13,10 +13,10 @@ static int get_new_sb(struct f2fs_super_block *sb)
 {
 	u_int32_t zone_size_bytes, zone_align_start_offset;
 	u_int32_t blocks_for_sit, blocks_for_nat, blocks_for_ssa;
-	u_int32_t sit_segments, diff, total_meta_segments;
+	u_int32_t sit_segments, nat_segments, diff, total_meta_segments;
 	u_int32_t total_valid_blks_available;
 	u_int32_t sit_bitmap_size, max_sit_bitmap_size;
-	u_int32_t max_nat_bitmap_size, max_nat_segments;
+	u_int32_t max_nat_bitmap_size;
 	u_int32_t segment_size_bytes = 1 << (get_sb(log_blocksize) +
 				get_sb(log_blocks_per_seg));
 	u_int32_t blks_per_seg = 1 << get_sb(log_blocks_per_seg);
@@ -47,7 +47,15 @@ static int get_new_sb(struct f2fs_super_block *sb)
 		get_sb(segment_count_sit))) * blks_per_seg;
 	blocks_for_nat = SIZE_ALIGN(total_valid_blks_available,
 			NAT_ENTRY_PER_BLOCK);
-	set_sb(segment_count_nat,