From: Gao Xiang
To deal with the cases in which inplace decompression is infeasible
for some inplace I/O, per-CPU buffers were introduced to get rid of page
allocation latency and thrashing for low-latency decompression algorithms
such as lz4.
For the big pcluster feature, introduce multipage per
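As an aside, the per-CPU buffer trick above can be sketched in plain userspace C with a thread-local scratch buffer that is allocated once and reused, so the decompression hot path never pays allocation latency. Everything below (the names and the 16-page size) is illustrative, not the actual erofs code:

```c
#include <stdlib.h>

/* One reusable "multipage" scratch area per thread, allocated lazily.
 * SCRATCH_SIZE and the function name are made up for illustration. */
enum { SCRATCH_SIZE = 16 * 4096 };

static _Thread_local unsigned char *scratch;

unsigned char *get_scratch_buffer(void)
{
	if (!scratch)			/* first call on this thread only */
		scratch = malloc(SCRATCH_SIZE);
	return scratch;			/* reused by every later call */
}
```

Repeated calls on the same thread hand back the same buffer, which is the property the kernel's per-CPU buffers rely on (with preemption disabled standing in for thread-locality).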
> RSP: 002b:7ffe1fa3c2a8 EFLAGS: 0286 ORIG_RAX: 00a5
> RAX: ffda RBX: 7ffe1fa3c300 RCX: 00444f7a
> RDX: 2000 RSI: 2100 RDI: 7ffe1fa3c2c0
> RBP: 00007ffe1fa3c2c0 R08: 7ffe1fa3c300 R09:
>
Thanks for the report. It's actually an uninitialized spinlock issue
due to the new patchset (bisect was wrong here); I will fix it up soon.
Thanks,
Gao Xiang
From: Gao Xiang
Enable COMPR_CFGS and BIG_PCLUSTER since the implementations are
all settled properly.
Acked-by: Chao Yu
Signed-off-by: Gao Xiang
---
fs/erofs/erofs_fs.h | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/fs/erofs/erofs_fs.h b/fs/erofs/erofs_fs.h
index
From: Gao Xiang
Prior to big pcluster, there was only one compressed page, so it'd be
easy to map this. However, when big pcluster is enabled, more work
needs to be done to handle multiple compressed pages. In detail,
- (maptype 0) if there is only one compressed page + no need
to copy inplace
From: Gao Xiang
Different from non-compact indexes, several lclusters are packed
into the compact form at once and a unique base blkaddr is stored for
each pack, so each lcluster index takes less space on average
(e.g. 2 bytes for COMPACT_2B). BTW, that is also why the BIG_PCLUSTER
switch should
From: Gao Xiang
When the INCOMPAT_BIG_PCLUSTER sb feature is enabled, legacy compress indexes
will also have the same on-disk header as compact indexes to keep per-file
configurations instead of leaving it zeroed.
If ADVISE_BIG_PCLUSTER is set for a file, CBLKCNT will be loaded for each
pcluster
From: Gao Xiang
Adjust per-CPU buffers on demand since the big pcluster definition is
available. Also, bail out on unsupported pcluster sizes according to
Z_EROFS_PCLUSTER_MAX_SIZE.
Acked-by: Chao Yu
Signed-off-by: Gao Xiang
---
fs/erofs/decompressor.c | 20
fs/erofs/internal.h
From: Gao Xiang
Big pcluster indicates that the size of compressed data for each physical
pcluster is no longer fixed at the block size, but could be more than 1
block (more accurately, 1 logical pcluster).
When big pcluster feature is enabled for head0/1, delta0 of the 1st
non-head lcluster index
From: Gao Xiang
When picking up inplace I/O pages, they should be traversed in reverse
order, in alignment with the traversal order of file-backed online pages.
Also, index should be updated together when preloading compressed pages.
Previously, only page-sized pclustersize was supported so
From: Gao Xiang
Since multiple pcluster sizes could be used at once, the number of
compressed pages will become a variable factor. It's necessary to
introduce slab pools rather than a single slab cache now.
This limits the pclustersize to 1M (Z_EROFS_PCLUSTER_MAX_SIZE), and
get rid
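For illustration only (constants and names here are assumptions, not the real erofs pool sizes): with pclustersize capped at 1M and 4K pages, a pcluster spans 1..256 compressed pages, so a handful of size-class pools can replace the single slab cache, and allocation just picks the smallest class that fits:

```c
/* Hypothetical size classes in pages; the real pool sizes differ. */
#define NR_POOLS 5

static const unsigned int pool_pages[NR_POOLS] = { 1, 4, 16, 64, 256 };

/* return the index of the smallest pool that fits nrpages,
 * or -1 when the pcluster exceeds the 256-page (1M) limit */
int pick_pool(unsigned int nrpages)
{
	for (int i = 0; i < NR_POOLS; i++)
		if (pool_pages[i] >= nrpages)
			return i;
	return -1;	/* bail out: unsupported pclustersize */
}
```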
From: Gao Xiang
The formal big pcluster design is actually more powerful / flexible than
the previous thought, whose pclustersize was fixed as power-of-2 blocks,
which was obviously inefficient and space-wasting. Instead, pclustersize
can now be set independently for each pcluster, so various
so successfully boot buildroot
& Android system with android-mainline repo.
current mkfs repo for big pcluster:
https://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git -b
experimental-bigpcluster-compact
Thanks for your time on reading this!
Thanks,
Gao Xiang
changes since v2:
Hi Joe,
On Tue, Apr 06, 2021 at 08:38:44PM -0700, Joe Perches wrote:
> On Wed, 2021-04-07 at 07:54 +0800, Gao Xiang wrote:
> > Hi Colin,
> >
> > On Tue, Apr 06, 2021 at 05:27:18PM +0100, Colin King wrote:
> > > From: Colin Ian King
> > >
> > > Th
zing it to zero.
>
> Addresses-Coverity: ("Uninitialized scalar variable")
> Fixes: 1aa5f2e2feed ("erofs: support decompress big pcluster for lz4 backend")
> Signed-off-by: Colin Ian King
Thank you very much for catching this! It looks good to me,
Reviewed-by: Gao
On Tue, Apr 06, 2021 at 12:08:55PM +0200, Helge Deller wrote:
> On 4/6/21 6:59 AM, Gao Xiang wrote:
> > From: Gao Xiang
> >
> > commit b344d6a83d01 ("parisc: add support for cmpxchg on u8 pointers")
> > can generate a sparse warning ("cast truncates
From: Gao Xiang
commit b344d6a83d01 ("parisc: add support for cmpxchg on u8 pointers")
can generate a sparse warning ("cast truncates bits from constant
value"), which has been reported several times [1] [2] [3].
The original code worked as expected, but anyway, let's silence
On Thu, Apr 01, 2021 at 11:29:44AM +0800, Gao Xiang wrote:
> Hi folks,
>
> This is the formal version of EROFS big pcluster support, which means
> EROFS can compress data into more than 1 fs block after this patchset.
>
> {l,p}cluster are EROFS-specific concepts, standing fo
On Wed, Mar 31, 2021 at 05:39:20AM -0400, Ruiqi Gong wrote:
> zmap.c: s/correspoinding/corresponding
> zdata.c: s/endding/ending
>
> Reported-by: Hulk Robot
> Signed-off-by: Ruiqi Gong
Reviewed-by: Gao Xiang
Thanks,
Gao Xiang
From: Gao Xiang
Hi folks,
This is the formal version of EROFS big pcluster support, which means
EROFS can compress data into more than 1 fs block after this patchset.
{l,p}cluster are EROFS-specific concepts, standing for `logical cluster'
and `physical cluster' correspondingly. Logical
From: Gao Xiang
Add a bitmap for available compression algorithms and a variable-sized
on-disk table for compression options in preparation for upcoming big
pcluster and LZMA algorithm, which follows the end of super block.
To parse the compression options, the bitmap is scanned one by one
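A minimal userspace sketch of that parsing scheme (the record layout with a 16-bit little-endian size prefix, and all the names below, are assumptions for illustration, not the real on-disk format):

```c
#include <stdint.h>
#include <stddef.h>

/* an algorithm's option record as located in the buffer */
struct compr_opt { uint16_t size; const uint8_t *data; };

/* Scan a 16-bit availability bitmap bit by bit; for each set bit,
 * consume one variable-sized option record prefixed by its own size.
 * Returns the number of algorithms parsed, or -1 on a bad record. */
int parse_compr_opts(uint16_t bitmap, const uint8_t *p, size_t len,
		     struct compr_opt opts[16])
{
	int n = 0;

	for (int alg = 0; alg < 16; alg++) {
		if (!(bitmap & (1u << alg)))
			continue;
		if (len < 2)
			return -1;	/* truncated size prefix */
		uint16_t size = (uint16_t)(p[0] | p[1] << 8);	/* LE16 */
		if (len < 2 + (size_t)size)
			return -1;	/* record overruns the buffer */
		opts[n].size = size;
		opts[n].data = p + 2;
		n++;
		p += 2 + size;
		len -= 2 + size;
	}
	return n;
}
```

The point of the bitmap is visible here: one pass discovers every available algorithm, then the records are walked one by one in bit order.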
> > If that doesn't look ok for us, I could use > 80 line for this instead,
> > but I tend to not break the message ..
>
> Xiang,
>
> Ah, I didn't notice this is following above style, if it's fine to you,
> let's use some tabs in front of message line, though it will cause
> exceeding 80 line warning.
>
I found a reference here,
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/coding-style.rst?h=v5.11#n99
also vaguely remembered some threads from Linus, but hard to find now :-(
ok, I will insert tabs instead, thanks for the suggestion!
Thanks,
Gao Xiang
> Thanks,
Hi Chao,
On Mon, Mar 29, 2021 at 02:26:05PM +0800, Chao Yu wrote:
> On 2021/3/29 9:23, Gao Xiang wrote:
> > From: Gao Xiang
> >
> > Add a bitmap for available compression algorithms and a variable-sized
> > on-disk table for compression options in preparation fo
-sized 1000m).
For the comparison of other filesystems, see:
https://github.com/erofs/erofs-openbenchmark/wiki
Thanks,
Gao Xiang
RAW DATA:
benchmarking imgs/enwik9_4k.erofs.compacted.img with erofs
mntdir/enwik9
[seqread]
READ: bw=832MiB/s (873MB/s), 832MiB/s-832MiB/s (873MB/s-873MB/s), io=954MiB
From: Gao Xiang
Introduce z_erofs_lz4_cfgs to store all lz4 configurations.
Currently it's only max_distance, but will be used for new
features later.
Reviewed-by: Chao Yu
Signed-off-by: Gao Xiang
---
fs/erofs/decompressor.c | 15 +--
fs/erofs/erofs_fs.h | 6 ++
fs/erofs
number of concurrent IOs. So appropriately
reducing this value can improve performance.
Decreasing this value will reduce the compression ratio (except
when input_size
Signed-off-by: Guo Weichao
[ Gao Xiang: introduce struct erofs_sb_lz4_info for configurations. ]
Reviewed-by: Chao Yu
Signed
From: Gao Xiang
Introduce erofs_sb_has_xxx() to make long checks short, especially
for later big pcluster & LZMA features.
Reviewed-by: Chao Yu
Signed-off-by: Gao Xiang
---
fs/erofs/decompressor.c | 3 +--
fs/erofs/internal.h | 9 +
fs/erofs/super.c| 2 +-
3 files cha
From: Gao Xiang
Hi folks,
When we provide support for different algorithms or big pcluster, it'd
be necessary to record some configuration on a per-fs basis.
For example, when big pcluster feature for lz4 is enabled, we need to
know the largest pclustersize in the whole fs instance
From: Gao Xiang
If any unknown i_format fields are set (may be of some new incompat
inode features), mark such inode as unsupported.
Just in case of any new incompat i_format fields added in the future.
Fixes: 431339ba9042 ("staging: erofs: add inode operations")
Cc: # 4.19+
Hi Chao,
On Sat, Mar 27, 2021 at 05:46:44PM +0800, Chao Yu wrote:
> On 2021/3/27 11:49, Gao Xiang wrote:
> > From: Gao Xiang
> >
> > Add a bitmap for available compression algorithms and a variable-sized
> > on-disk table for compression options in preparation fo
Hi Chao,
On Sat, Mar 27, 2021 at 05:34:33PM +0800, Chao Yu wrote:
> On 2021/3/27 11:49, Gao Xiang wrote:
> > From: Huang Jianan
> >
> > lz4 uses LZ4_DISTANCE_MAX to record history preservation. When
> > using rolling decompression, a block with a higher compression
&g
,
> it can eliminate the sluggish issue caused by slow foreground GC
> operation when GC is triggered from a process with limited I/O
> and CPU resources.
>
> Original idea is from Xiang.
>
> Signed-off-by: Gao Xiang
> Signed-off-by: Chao Yu
Ah, that was quite an old comm
On Sun, Mar 21, 2021 at 11:36:10PM -0400, Theodore Ts'o wrote:
> On Mon, Mar 22, 2021 at 11:05:13AM +0800, Gao Xiang wrote:
> > I think the legal name would be "Zhang Yi" (family name goes first [1])
> > according to
> > The Chinese phonetic alphabet spel
wrote my own name as this but I also noticed the western
ordering of names is quite common for Chinese people in Linux kernel.
Anyway, it's just my preliminary personal thought (might be just my
own preference) according to (I think, maybe) formal occasions.
[1] https://en.wikipedia.org/wiki/Wikipedia:Naming_conventions_(Chinese)
[2] http://www.moe.gov.cn/ewebeditor/uploadfile/2015/01/13/20150113091249368.pdf
[3] https://en.wikipedia.org/wiki/Yao_Ming
[4] https://www.nbcsports.com/edge/basketball/nba/player/28778/yao-ming
[5]
https://news.cgtn.com/news/2020-09-26/Spotlight-Ex-NBA-star-Yao-Ming-05-02-2018-U36mm3dYas/index.html
Thanks,
Gao Xiang
>
> Thanks,
> Yi.
From: Gao Xiang
Add a missing case which could cause unnecessary page allocation instead
of directly using inplace I/O, which increases the runtime extra
memory footprint.
The detail is, considering an online file-backed page, the right half
of the page is chosen to be cached (e.g. the end page
Hi Chao,
On Fri, Mar 19, 2021 at 10:15:18AM +0800, Chao Yu wrote:
> On 2021/3/6 12:04, Gao Xiang wrote:
...
> > + (*last_block + 1 != current_block || !*eblks)) {
>
> Xiang,
>
> I found below function during checking bi_max_vecs usage in f2fs:
>
> /**
>
age()
> after we pass __GFP_NOFAIL parameter.
Yeah, good point! sorry I forgot that.
Jianan,
Could you take some time resending the next version with all new things
updated?... thus Chao could review easily, Thanks!
Thanks,
Gao Xiang
>
> Thanks,
>
> > set_page_private(victim, Z_EROFS_SHORTLIVED_PAGE);
> >
>
and sync decompression for atomic contexts only */
+ if (in_atomic() || irqs_disabled()) {
queue_work(z_erofs_workqueue, &io->u.work);
+ sbi->ctx.readahead_sync_decompress = true;
+ return;
+ }
+ z_erofs_decompressqueue_work(&io->u.work);
}
Otherwise, it looks good to me. I've applied to dev-test
for preliminary testing.
Reviewed-by: Gao Xiang
Thanks,
Gao Xiang
adding another !Uptodate case
> for such case.
>
> Signed-off-by: Huang Jianan
> Signed-off-by: Guo Weichao
Reviewed-by: Gao Xiang
Thanks,
Gao Xiang
Hi Linus,
Could you consider this pull request for 5.11-rc3?
All details about this new regression are as below.
All commits have been tested and have been in -next for days.
This merges cleanly with master.
Thanks,
Gao Xiang
The following changes since commit
On Tue, Mar 09, 2021 at 04:53:41PM +0100, Christoph Hellwig wrote:
> Add a new alloc_anon_inode helper that allocates an inode on
> the anon_inode file system.
>
> Signed-off-by: Christoph Hellwig
Reviewed-by: Gao Xiang
Thanks,
Gao Xiang
ucing a unique
fs...
Reviewed-by: Gao Xiang
Thanks,
Gao Xiang
Signed-off-by: Gao Xiang
---
Documentation/admin-guide/sysrq.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/Documentation/admin-guide/sysrq.rst
b/Documentation/admin-guide/sysrq.rst
index 67dfa4c29093..60ce5f5ebab6 100644
--- a/Documentation/admin-guide/sysrq.rst
+++ b/Doc
On Mon, Mar 08, 2021 at 10:52:19AM +0800, Chao Yu wrote:
> On 2021/3/8 10:36, Gao Xiang wrote:
> > Hi Chao,
> >
> > On Mon, Mar 08, 2021 at 09:29:30AM +0800, Chao Yu wrote:
> > > On 2021/3/6 12:04, Gao Xiang wrote:
> > > > From: Gao Xiang
> > >
Hi Chao,
On Mon, Mar 08, 2021 at 09:29:30AM +0800, Chao Yu wrote:
> On 2021/3/6 12:04, Gao Xiang wrote:
> > From: Gao Xiang
> >
> > Martin reported an issue that directory read could be hung on the
> > latest -rc kernel with some certain image. The root cause is t
From: Gao Xiang
Martin reported an issue that directory read could be hung on the
latest -rc kernel with some certain image. The root cause is that
commit baa2c7c97153 ("block: set .bi_max_vecs as actual allocated
vector number") changes .bi_max_vecs behavior. bio->bi_max_vecs
is
= false;
> +#endif
How about moving this stuff to erofs_default_options() as we
did for max_sync_decompress_pages?
Thanks,
Gao Xiang
victim = availables[--top];
> get_page(victim);
> } else {
> - victim = erofs_allocpage(pagepool, GFP_KERNEL);
> + victim = erofs_allocpage(pagepool, GFP_KERNEL |
> __GFP_NOFAIL);
80 char limi
_page_for_submission(struct z_erofs_pcluster *pcl,
> unsigned int nr,
> struct list_head *pagepool,
> @@ -1333,7 +1342,8 @@ static void z_erofs_readahead(struct readahead_control
> *rac)
> s
On Tue, Feb 23, 2021 at 03:44:18PM +0800, Gao Xiang wrote:
> On Tue, Feb 23, 2021 at 03:31:19PM +0800, Huang Jianan via Linux-erofs wrote:
> > lz4 uses LZ4_DISTANCE_MAX to record history preservation. When
> > using rolling decompression, a block with a higher compression
>
ofs_sb_info *sbi,
> + struct erofs_super_block *dsb)
> +{
> + u16 distance = le16_to_cpu(dsb->lz4_max_distance);
> +
> + sbi->lz4_max_distance_pages = distance ?
> + (DIV_ROUND_UP(distance, PAGE_SIZE) + 1)
> :
Unneeded parentheses here (I'll update it when applying).
Otherwise it looks good to me.
Reviewed-by: Gao Xiang
Thanks,
Gao Xiang
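Standing the quoted computation alone (constants as in lz4; PAGE_SIZE assumed 4K here, and the function name is made up): the on-disk byte distance is converted to a page count, with one extra page because the sliding window can straddle a page boundary, and 0 falls back to the lz4 maximum:

```c
#define PAGE_SIZE 4096U
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#define LZ4_DISTANCE_MAX 65535U	/* lz4's maximum match distance */

/* pages needed to preserve a match-distance window of `distance' bytes */
unsigned int lz4_distance_pages(unsigned int distance)
{
	if (!distance)		/* unset on disk: assume the lz4 maximum */
		distance = LZ4_DISTANCE_MAX;
	return DIV_ROUND_UP(distance, PAGE_SIZE) + 1;
}
```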
(cont. the previous reply)
On Tue, Feb 23, 2021 at 01:19:26PM +0800, Gao Xiang wrote:
...
> > + __le16 lz4_max_distance; /* lz4 max distance */
unneeded comment.
> > + __u8 reserved2[42];
> > };
> >
> > /*
> > diff --git a/fs/erofs/intern
w about adding a new helper e.g. z_erofs_load_lz4_config(sb, dsb)
in decompressor.c, and
int z_erofs_load_lz4_config(sb, dsb)
{
	if (dsb->lz4_max_distance)
		sbi->lz4_max_distance_pages = DIV_ROUND_UP ...
	else
		sbi->lz4_max_distance_pages = LZ4_MAX_DISTANCE_PAGES;
	return 0;
}
Also add a declaration in internal.h:
#ifdef CONFIG_EROFS_FS_ZIP
int z_erofs_load_lz4_config.
#else
static inline int z_erofs_load_lz4_config() { return 0; }
#endif
Thanks,
Gao Xiang
>
> sbi->build_time = le64_to_cpu(dsb->build_time);
> sbi->build_time_nsec = le32_to_cpu(dsb->build_time_nsec);
> --
> 2.25.1
>
On Tue, Feb 23, 2021 at 10:03:59AM +0800, Huang Jianan wrote:
> Hi Xiang,
>
> On 2021/2/22 12:44, Gao Xiang wrote:
> > Hi Jianan,
> >
> > On Thu, Feb 18, 2021 at 08:00:49PM +0800, Huang Jianan via Linux-erofs
> > wrote:
> > > From: huangjianan
> &
tance;
/* 4-byte aligned */
compr alg x (if available) u8 alg_opt_size;
...
...
When reading the sb, it first scans the whole bitmap and gets all the
available algorithms in the image at once, and then reads such compr
opts one by one.
Do you have some interest and extra time to implement it? :) That
makes me work less since I'm debugging mbpcluster compression now...
Thanks,
Gao Xiang
a week. This merges cleanly with master.
Thanks,
Gao Xiang
The following changes since commit 19c329f6808995b142b3966301f217c831e7cf31:
Linux 5.11-rc4 (2021-01-17 16:37:05 -0800)
are available in the Git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs.git
tags/erofs
Hi Chao,
On Wed, Jan 20, 2021 at 09:30:16AM +0800, Gao Xiang wrote:
> From: Gao Xiang
>
> syzbot generated a crafted blkszbits which can be shifted
> out-of-bounds[1]. So directly print unsupported blkszbits
> instead of blksize.
>
> [1] https://lore.kernel.org/r/
Hi Chao,
On Wed, Feb 10, 2021 at 08:09:22PM +0800, Chao Yu wrote:
> Hi Xiang,
>
> On 2021/2/9 21:06, Gao Xiang via Linux-erofs wrote:
> > From: Gao Xiang
> >
> > Currently, although set_bit() & test_bit() pairs are used as a fast-
> > path for initialized
From: Gao Xiang
Currently, set_bit() & test_bit() pairs are used as a fast-path
for initialized configurations. However, these atomic ops are
actually relaxed forms. Instead, load-acquire & store-release forms are
needed to make sure uninitialized fields won't be observed in adva
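A userspace C11 sketch of the ordering problem being described (an analogy with stdatomic, not the kernel code, which uses smp_store_release/smp_load_acquire; all names are illustrative): the writer must publish the ready flag with release semantics only after initializing the guarded field, and the reader must check it with acquire semantics:

```c
#include <stdatomic.h>
#include <stdbool.h>

static int config_value;	/* field guarded by the flag */
static atomic_bool config_ready;

void publish_config(int v)
{
	config_value = v;	/* A: initialize the fields first */
	atomic_store_explicit(&config_ready, true,
			      memory_order_release);	/* B: then publish */
}

/* returns the published value, or -1 if not published yet */
int read_config(void)
{
	if (!atomic_load_explicit(&config_ready, memory_order_acquire))
		return -1;
	return config_value;	/* ordered after the flag read */
}
```

With relaxed operations (or plain set_bit/test_bit), nothing stops a reader from observing the flag as set while still seeing a stale, uninitialized config_value; the acquire/release pair forbids exactly that reordering.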
ches could be merged as one patch if possible,
although just my own thoughts.
Thanks,
Gao Xiang
> Signed-off-by: Chaitanya Kulkarni
> ---
> fs/erofs/data.c | 6 ++
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/fs/erofs/data.c b/fs/erofs/data.c
> index ea
From: Gao Xiang
syzbot generated a crafted blkszbits which can be shifted
out-of-bounds[1]. So directly print unsupported blkszbits
instead of blksize.
[1] https://lore.kernel.org/r/c72ddd05b9444...@google.com
Reported-by: syzbot+c68f467cd7c45860e...@syzkaller.appspotmail.com
Signed
, AOSP) fix sub-directory prefix for canned fs_config.
Thanks,
Gao Xiang
| ^~~~
> fs/erofs/namei.c:237:3: note: in expansion of macro 'erofs_dbg'
> 237 | erofs_dbg("%pd, %s (nid %llu) found, d_type %u", __func__,
> | ^
Thanks for modifying this. Use %pd is more reasonable than using d_name...
It
we don't pay much attention to the change of iomap.
>
> No, that is never an excuse for upstream development.
Ok, personally I also agree this, let's go further in this way.
Thanks,
Gao Xiang
>
, hope that
Jianan could pick this work up. That would be better.
Thanks,
Gao Xiang
>
t)
> +{
> + unsigned i_blkbits = READ_ONCE(inode->i_blkbits);
It would be better to fold in check_direct_IO, also the READ_ONCE above
is somewhat weird...
No rush here, since 5.11-rc1 hasn't been out yet, we have >= 2 months to
work it out.
Thanks,
Gao Xiang
> + unsigned
o some different overlapped memcpy() implementation which was reviewed
and added to akpm tree, hopefully upstream for this 5.11 cycle too. ]
Thanks,
Gao Xiang
[1] https://lore.kernel.org/r/20201122030749.2698994-1-hsiang...@redhat.com
The following changes since
ocation length overflow in xfs_bmapi_write()")
and the reason for adding this is still valid for now?
Thanks,
Gao Xiang
; Fixes: 9da681e017a3 ("staging: erofs: support bmap")
> Signed-off-by: Huang Jianan
> Signed-off-by: Guo Weichao
Reviewed-by: Gao Xiang
Also, I think Chao has sent his Reviewed-by in the previous reply ---
so unless some major modification happens, it needs to be attached with
all ne
From: Gao Xiang
Try to forcibly switch to inplace I/O under low memory scenarios in
order to avoid direct memory reclaim due to cached page allocation.
Reviewed-by: Chao Yu
Signed-off-by: Gao Xiang
---
v2:
refine the gfp definition.
fs/erofs/compress.h | 3 +++
fs/erofs/zdata.c| 48
its,
> > should avoid using generic_block_bmap.
> >
> > Fixes: 9da681e017a3 ("staging: erofs: support bmap")
> > Signed-off-by: Huang Jianan
> > Signed-off-by: Guo Weichao
Could you send out an updated version? I might get a point to freeze
dev b