Re: [PATCH] erofs: support adjust lz4 history window size
On Tue, Feb 23, 2021 at 10:03:59AM +0800, Huang Jianan wrote:
> Hi Xiang,
>
> On 2021/2/22 12:44, Gao Xiang wrote:
> > Hi Jianan,
> >
> > On Thu, Feb 18, 2021 at 08:00:49PM +0800, Huang Jianan via Linux-erofs wrote:
> > > From: huangjianan
> > >
> > > lz4 uses LZ4_DISTANCE_MAX to record history preservation. When
> > > using rolling decompression, a block with a higher compression
> > > ratio will cause a larger memory allocation (up to 64k). It may
> > > cause a large resource burden in extreme cases on devices with
> > > small memory and a large number of concurrent IOs. So appropriately
> > > reducing this value can improve performance.
> > >
> > > Decreasing this value will reduce the compression ratio (except
> > > when input_size < LZ4_DISTANCE_MAX). But considering that erofs
> > > currently only supports 4k output, reducing this value will not
> > > significantly reduce the compression benefits.
> > >
> > > Signed-off-by: Huang Jianan
> > > Signed-off-by: Guo Weichao
> > > ---
> > >  fs/erofs/decompressor.c | 13 +++++++++----
> > >  fs/erofs/erofs_fs.h     |  3 ++-
> > >  fs/erofs/internal.h     |  3 +++
> > >  fs/erofs/super.c        |  3 +++
> > >  4 files changed, 17 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/fs/erofs/decompressor.c b/fs/erofs/decompressor.c
> > > index 1cb1ffd10569..94ae56b3ff71 100644
> > > --- a/fs/erofs/decompressor.c
> > > +++ b/fs/erofs/decompressor.c
> > > @@ -36,22 +36,27 @@ static int z_erofs_lz4_prepare_destpages(struct z_erofs_decompress_req *rq,
> > >  	struct page *availables[LZ4_MAX_DISTANCE_PAGES] = { NULL };
> > >  	unsigned long bounced[DIV_ROUND_UP(LZ4_MAX_DISTANCE_PAGES,
> > >  					   BITS_PER_LONG)] = { 0 };
> > > +	unsigned int lz4_distance_pages = LZ4_MAX_DISTANCE_PAGES;
> > >  	void *kaddr = NULL;
> > >  	unsigned int i, j, top;
> > >
> > > +	if (EROFS_SB(rq->sb)->compr_alg)
> > > +		lz4_distance_pages = DIV_ROUND_UP(EROFS_SB(rq->sb)->compr_alg,
> > > +						  PAGE_SIZE) + 1;
> > > +
> >
> > Thanks for your patch, I agree that will reduce the runtime memory
> > footprint, and keeping the max sliding window on-disk in bytes (rather
> > than in blocks) is better. But could we calculate lz4_distance_pages
> > ahead of time when reading the super_block?
>
> Thanks for the suggestion, I will update it soon.
>
> > Also, in the next cycle, I'd like to introduce a bitmap of available
> > algorithms (maximum 16-bit) for the next LZMA algorithm, and for each
> > available algorithm introduce an on-disk variable-length array like
> > below:
> >
> > bitmap(16-bit)           2    1    0
> >                    ...  LZMA  LZ4
> >
> > __le16 compr_opt_off;	/* get the opt array start offset (I think also in 4-byte) */
> >
> > compr alg 0 (lz4)	__le16 alg_opt_size;
> > 			/* next opt off = roundup(off + alg_opt_size, 4); */
> > 			__le16 lz4_max_distance;
> >
> > 			/* 4-byte aligned */
> > compr alg x (if available)	u8 alg_opt_size;
> > 			...
> >
> > ...
> >
> > When reading the sb, it first scans the whole bitmap and gets all the
> > available algorithms in the image at once, and then reads such compr
> > opts one-by-one.
> >
> > Do you have some interest and extra time to implement it? :) That
> > makes me work less since I'm debugging mbpcluster compression now...
>
> Sounds good, I will try to do this part of the work.

Yeah, but it seems to be part of the next LZMA algorithm patchset (with
a brand new INCOMPAT feature). I think we could introduce a __le16
lz4_max_distance field in the sb reserved space for now as a simple
backporting solution (since we only use a < 64kb sliding window, the
image would remain forward compatible with old kernels: 0 means a 64kb
sliding window, otherwise it will be < 64kb.)

And with the new INCOMPAT_COMPR_OPT feature, the lz4_max_distance field
will be turned into compr_opt_off instead, and the variable-length
array will be used then.

So could you revise the patchset as above? Thanks!

Thanks,
Gao Xiang

> Thanks,
> Jianan
>
> > Thanks,
> > Gao Xiang
> >
Re: [PATCH] erofs: support adjust lz4 history window size
Hi Xiang,

On 2021/2/22 12:44, Gao Xiang wrote:
> Hi Jianan,
>
> On Thu, Feb 18, 2021 at 08:00:49PM +0800, Huang Jianan via Linux-erofs wrote:
>> From: huangjianan
>>
>> lz4 uses LZ4_DISTANCE_MAX to record history preservation. When
>> using rolling decompression, a block with a higher compression
>> ratio will cause a larger memory allocation (up to 64k). It may
>> cause a large resource burden in extreme cases on devices with
>> small memory and a large number of concurrent IOs. So appropriately
>> reducing this value can improve performance.
>>
>> Decreasing this value will reduce the compression ratio (except
>> when input_size < LZ4_DISTANCE_MAX). But considering that erofs
>> currently only supports 4k output, reducing this value will not
>> significantly reduce the compression benefits.
>>
>> Signed-off-by: Huang Jianan
>> Signed-off-by: Guo Weichao
>> ---
>>  fs/erofs/decompressor.c | 13 +++++++++----
>>  fs/erofs/erofs_fs.h     |  3 ++-
>>  fs/erofs/internal.h     |  3 +++
>>  fs/erofs/super.c        |  3 +++
>>  4 files changed, 17 insertions(+), 5 deletions(-)
>>
>> diff --git a/fs/erofs/decompressor.c b/fs/erofs/decompressor.c
>> index 1cb1ffd10569..94ae56b3ff71 100644
>> --- a/fs/erofs/decompressor.c
>> +++ b/fs/erofs/decompressor.c
>> @@ -36,22 +36,27 @@ static int z_erofs_lz4_prepare_destpages(struct z_erofs_decompress_req *rq,
>>  	struct page *availables[LZ4_MAX_DISTANCE_PAGES] = { NULL };
>>  	unsigned long bounced[DIV_ROUND_UP(LZ4_MAX_DISTANCE_PAGES,
>>  					   BITS_PER_LONG)] = { 0 };
>> +	unsigned int lz4_distance_pages = LZ4_MAX_DISTANCE_PAGES;
>>  	void *kaddr = NULL;
>>  	unsigned int i, j, top;
>>
>> +	if (EROFS_SB(rq->sb)->compr_alg)
>> +		lz4_distance_pages = DIV_ROUND_UP(EROFS_SB(rq->sb)->compr_alg,
>> +						  PAGE_SIZE) + 1;
>> +
>
> Thanks for your patch, I agree that will reduce the runtime memory
> footprint, and keeping the max sliding window on-disk in bytes (rather
> than in blocks) is better. But could we calculate lz4_distance_pages
> ahead of time when reading the super_block?

Thanks for the suggestion, I will update it soon.

> Also, in the next cycle, I'd like to introduce a bitmap of available
> algorithms (maximum 16-bit) for the next LZMA algorithm, and for each
> available algorithm introduce an on-disk variable-length array like
> below:
>
> bitmap(16-bit)           2    1    0
>                    ...  LZMA  LZ4
>
> __le16 compr_opt_off;	/* get the opt array start offset (I think also in 4-byte) */
>
> compr alg 0 (lz4)	__le16 alg_opt_size;
> 			/* next opt off = roundup(off + alg_opt_size, 4); */
> 			__le16 lz4_max_distance;
>
> 			/* 4-byte aligned */
> compr alg x (if available)	u8 alg_opt_size;
> 			...
>
> ...
>
> When reading the sb, it first scans the whole bitmap and gets all the
> available algorithms in the image at once, and then reads such compr
> opts one-by-one.
>
> Do you have some interest and extra time to implement it? :) That
> makes me work less since I'm debugging mbpcluster compression now...

Sounds good, I will try to do this part of the work.

Thanks,
Jianan

> Thanks,
> Gao Xiang
Re: [PATCH] erofs: support adjust lz4 history window size
Hi Jianan,

On Thu, Feb 18, 2021 at 08:00:49PM +0800, Huang Jianan via Linux-erofs wrote:
> From: huangjianan
>
> lz4 uses LZ4_DISTANCE_MAX to record history preservation. When
> using rolling decompression, a block with a higher compression
> ratio will cause a larger memory allocation (up to 64k). It may
> cause a large resource burden in extreme cases on devices with
> small memory and a large number of concurrent IOs. So appropriately
> reducing this value can improve performance.
>
> Decreasing this value will reduce the compression ratio (except
> when input_size < LZ4_DISTANCE_MAX). But considering that erofs
> currently only supports 4k output, reducing this value will not
> significantly reduce the compression benefits.
>
> Signed-off-by: Huang Jianan
> Signed-off-by: Guo Weichao
> ---
>  fs/erofs/decompressor.c | 13 +++++++++----
>  fs/erofs/erofs_fs.h     |  3 ++-
>  fs/erofs/internal.h     |  3 +++
>  fs/erofs/super.c        |  3 +++
>  4 files changed, 17 insertions(+), 5 deletions(-)
>
> diff --git a/fs/erofs/decompressor.c b/fs/erofs/decompressor.c
> index 1cb1ffd10569..94ae56b3ff71 100644
> --- a/fs/erofs/decompressor.c
> +++ b/fs/erofs/decompressor.c
> @@ -36,22 +36,27 @@ static int z_erofs_lz4_prepare_destpages(struct z_erofs_decompress_req *rq,
>  	struct page *availables[LZ4_MAX_DISTANCE_PAGES] = { NULL };
>  	unsigned long bounced[DIV_ROUND_UP(LZ4_MAX_DISTANCE_PAGES,
>  					   BITS_PER_LONG)] = { 0 };
> +	unsigned int lz4_distance_pages = LZ4_MAX_DISTANCE_PAGES;
>  	void *kaddr = NULL;
>  	unsigned int i, j, top;
>
> +	if (EROFS_SB(rq->sb)->compr_alg)
> +		lz4_distance_pages = DIV_ROUND_UP(EROFS_SB(rq->sb)->compr_alg,
> +						  PAGE_SIZE) + 1;
> +

Thanks for your patch, I agree that will reduce the runtime memory
footprint, and keeping the max sliding window on-disk in bytes (rather
than in blocks) is better. But could we calculate lz4_distance_pages
ahead of time when reading the super_block?

Also, in the next cycle, I'd like to introduce a bitmap of available
algorithms (maximum 16-bit) for the next LZMA algorithm, and for each
available algorithm introduce an on-disk variable-length array like
below:

bitmap(16-bit)           2    1    0
                   ...  LZMA  LZ4

__le16 compr_opt_off;	/* get the opt array start offset (I think also in 4-byte) */

compr alg 0 (lz4)	__le16 alg_opt_size;
			/* next opt off = roundup(off + alg_opt_size, 4); */
			__le16 lz4_max_distance;

			/* 4-byte aligned */
compr alg x (if available)	u8 alg_opt_size;
			...

...

When reading the sb, it first scans the whole bitmap and gets all the
available algorithms in the image at once, and then reads such compr
opts one-by-one.

Do you have some interest and extra time to implement it? :) That
makes me work less since I'm debugging mbpcluster compression now...

Thanks,
Gao Xiang
[PATCH] erofs: support adjust lz4 history window size
From: huangjianan

lz4 uses LZ4_DISTANCE_MAX to record history preservation. When
using rolling decompression, a block with a higher compression
ratio will cause a larger memory allocation (up to 64k). It may
cause a large resource burden in extreme cases on devices with
small memory and a large number of concurrent IOs. So appropriately
reducing this value can improve performance.

Decreasing this value will reduce the compression ratio (except
when input_size < LZ4_DISTANCE_MAX). But considering that erofs
currently only supports 4k output, reducing this value will not
significantly reduce the compression benefits.

Signed-off-by: Huang Jianan
Signed-off-by: Guo Weichao
---
 fs/erofs/decompressor.c | 13 +++++++++----
 fs/erofs/erofs_fs.h     |  3 ++-
 fs/erofs/internal.h     |  3 +++
 fs/erofs/super.c        |  3 +++
 4 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/fs/erofs/decompressor.c b/fs/erofs/decompressor.c
index 1cb1ffd10569..94ae56b3ff71 100644
--- a/fs/erofs/decompressor.c
+++ b/fs/erofs/decompressor.c
@@ -36,22 +36,27 @@ static int z_erofs_lz4_prepare_destpages(struct z_erofs_decompress_req *rq,
 	struct page *availables[LZ4_MAX_DISTANCE_PAGES] = { NULL };
 	unsigned long bounced[DIV_ROUND_UP(LZ4_MAX_DISTANCE_PAGES,
 					   BITS_PER_LONG)] = { 0 };
+	unsigned int lz4_distance_pages = LZ4_MAX_DISTANCE_PAGES;
 	void *kaddr = NULL;
 	unsigned int i, j, top;
 
+	if (EROFS_SB(rq->sb)->compr_alg)
+		lz4_distance_pages = DIV_ROUND_UP(EROFS_SB(rq->sb)->compr_alg,
+						  PAGE_SIZE) + 1;
+
 	top = 0;
 	for (i = j = 0; i < nr; ++i, ++j) {
 		struct page *const page = rq->out[i];
 		struct page *victim;
 
-		if (j >= LZ4_MAX_DISTANCE_PAGES)
+		if (j >= lz4_distance_pages)
 			j = 0;
 
 		/* 'valid' bounced can only be tested after a complete round */
 		if (test_bit(j, bounced)) {
-			DBG_BUGON(i < LZ4_MAX_DISTANCE_PAGES);
-			DBG_BUGON(top >= LZ4_MAX_DISTANCE_PAGES);
-			availables[top++] = rq->out[i - LZ4_MAX_DISTANCE_PAGES];
+			DBG_BUGON(i < lz4_distance_pages);
+			DBG_BUGON(top >= lz4_distance_pages);
+			availables[top++] = rq->out[i - lz4_distance_pages];
 		}
 
 		if (page) {
diff --git a/fs/erofs/erofs_fs.h b/fs/erofs/erofs_fs.h
index 9ad1615f4474..bffc02991f5a 100644
--- a/fs/erofs/erofs_fs.h
+++ b/fs/erofs/erofs_fs.h
@@ -39,7 +39,8 @@ struct erofs_super_block {
 	__u8 uuid[16];          /* 128-bit uuid for volume */
 	__u8 volume_name[16];   /* volume name */
 	__le32 feature_incompat;
-	__u8 reserved2[44];
+	__le16 compr_alg;	/* compression algorithm specific parameters */
+	__u8 reserved2[42];
 };
 
 /*
diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
index 67a7ec945686..f1c99dc2659f 100644
--- a/fs/erofs/internal.h
+++ b/fs/erofs/internal.h
@@ -70,6 +70,9 @@ struct erofs_sb_info {
 
 	/* pseudo inode to manage cached pages */
 	struct inode *managed_cache;
+
+	/* compression algorithm specific parameters */
+	u16 compr_alg;
 #endif	/* CONFIG_EROFS_FS_ZIP */
 	u32 blocks;
 	u32 meta_blkaddr;
diff --git a/fs/erofs/super.c b/fs/erofs/super.c
index d5a6b9b888a5..198435e3eb2d 100644
--- a/fs/erofs/super.c
+++ b/fs/erofs/super.c
@@ -174,6 +174,9 @@ static int erofs_read_superblock(struct super_block *sb)
 	sbi->islotbits = ilog2(sizeof(struct erofs_inode_compact));
 	sbi->root_nid = le16_to_cpu(dsb->root_nid);
 	sbi->inos = le64_to_cpu(dsb->inos);
+#ifdef CONFIG_EROFS_FS_ZIP
+	sbi->compr_alg = le16_to_cpu(dsb->compr_alg);
+#endif
 	sbi->build_time = le64_to_cpu(dsb->build_time);
 	sbi->build_time_nsec = le32_to_cpu(dsb->build_time_nsec);
 
-- 
2.25.1