Eric Biggers wrote on Wed, Nov 2, 2022 at 11:18:
>
> On Wed, Nov 02, 2022 at 11:06:17AM +0800, fengnan chang wrote:
> >
> >
> > > On Nov 2, 2022, at 10:05, Eric Biggers wrote:
> > >
> > > On Wed, Jun 08, 2022 at 09:48:52PM +0800, Fengnan Chang wrote:
> > >> When
> On Nov 2, 2022, at 10:05, Eric Biggers wrote:
>
> On Wed, Jun 08, 2022 at 09:48:52PM +0800, Fengnan Chang wrote:
>> When decompression fails, f2fs_prepare_compress_overwrite will enter an
>> endless loop, which may cause a hung task.
>>
>> [ 14.088665] F2FS-fs (nvme0n1):
friendly ping...
fengnan chang wrote on Fri, Oct 14, 2022 at 16:46:
>
> ping, it seems this has been forgotten.
>
> > On Jun 8, 2022, at 21:48, Fengnan Chang wrote:
> >
> > When decompression fails, f2fs_prepare_compress_overwrite will enter an
> > endless loop, which may cause a hung task.
ping, it seems this has been forgotten.
> On Jun 8, 2022, at 21:48, Fengnan Chang wrote:
>
> When decompression fails, f2fs_prepare_compress_overwrite will enter an
> endless loop, which may cause a hung task.
>
> [ 14.088665] F2FS-fs (nvme0n1): lz4 decompress failed, ret:-4155
> [ 14.089
n ratio also changed.
I used to account for this via tracing; it's quite difficult to calculate.
>
> On 07/31, Fengnan Chang wrote:
>> Try to support compressed file write amplification accounting.
>>
>> Signed-off-by: Fengnan Chang
>> ---
From: Fengnan Chang
When writing a whole cluster, all pages are uptodate, so there is no need to call
f2fs_prepare_compress_overwrite; introduce f2fs_all_cluster_page_ready
to avoid this.
Signed-off-by: Fengnan Chang
---
fs/f2fs/compress.c | 21 ++---
fs/f2fs/data.c | 8
Try to support compressed file write amplification accounting.
Signed-off-by: Fengnan Chang
---
fs/f2fs/compress.c | 7 +--
fs/f2fs/data.c | 44
fs/f2fs/debug.c| 7 +--
fs/f2fs/f2fs.h | 36
From: Fengnan Chang
Optimise f2fs_write_cache_pages, and support compressed file write/read
amplification accounting.
v4:
fix read amplification accounting when reading one compressed page.
v3:
fix: enabling COMPRESS_CACHE may make read amplification accounting
incorrect.
Fengnan Chang (3):
f2fs
Since a pvec holds 15 pages, which is not a multiple of 4, when writing
compressed pages in 64K units, pagevec_lookup_range_tag will be called
again, and sometimes this takes a lot of time.
Use on-stack pages instead of a pvec to mitigate this problem.
Signed-off-by: Fengnan Chang
---
fs/f2fs
> On Jul 24, 2022, at 17:58, Chao Yu wrote:
>
> On 2022/7/17 13:32, Fengnan Chang wrote:
>> From: Fengnan Chang
>> Try to support compressed file write amplification accounting.
>> Signed-off-by: Fengnan Chang
>> ---
>> fs/f2fs/data.c | 26
From: Fengnan Chang
Since a pvec holds 15 pages, which is not a multiple of 4, when writing
compressed pages in 64K units, pagevec_lookup_range_tag will be called
again, and sometimes this takes a lot of time.
Use on-stack pages instead of a pvec to mitigate this problem.
Signed-off-by: Fengnan
From: Fengnan Chang
Try to support compressed file write amplification accounting.
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c | 26 +-
fs/f2fs/debug.c | 7 +--
fs/f2fs/f2fs.h | 34 ++
3 files changed, 60 insertions(+), 7
From: Fengnan Chang
Optimise f2fs_write_cache_pages, and support compressed file write/read
amplification accounting.
Fengnan Chang (3):
f2fs: introduce f2fs_all_cluster_page_ready
f2fs: use onstack pages instead of pvec
f2fs: support compressed file write/read amplification
fs/f2fs
From: Fengnan Chang
When writing a whole cluster, all pages are uptodate, so there is no need to call
f2fs_prepare_compress_overwrite; introduce f2fs_all_cluster_page_ready
to avoid this.
Signed-off-by: Fengnan Chang
---
fs/f2fs/compress.c | 17 +
fs/f2fs/data.c | 8 ++--
fs
ping
Fengnan Chang via Linux-f2fs-devel
wrote on Sat, May 7, 2022 at 16:18:
>
> Try to support compressed file write amplification accounting.
>
> Signed-off-by: Fengnan Chang
> ---
> fs/f2fs/data.c | 19 +++
> fs/f2fs/debug.c | 7 +--
When decompression fails, f2fs_prepare_compress_overwrite will enter an
endless loop, which may cause a hung task.
[ 14.088665] F2FS-fs (nvme0n1): lz4 decompress failed, ret:-4155
[ 14.089851] F2FS-fs (nvme0n1): lz4 decompress failed, ret:-4155
Signed-off-by: Fengnan Chang
---
fs/f2fs/compress.c | 21
ping...
Fengnan Chang via Linux-f2fs-devel
wrote on Sat, May 7, 2022 at 16:18:
>
> Optimise f2fs_write_cache_pages, and support compressed file write/read
> amplification accounting.
>
> Fengnan Chang (3):
> f2fs: introduce f2fs_all_cluster_page_ready
> f2fs: use onstack pages ins
ping...
Fengnan Chang wrote on Wed, May 11, 2022 at 15:14:
>
> When decompression fails, f2fs_prepare_compress_overwrite will enter an
> endless loop, which may cause a hung task.
>
> [ 14.088665] F2FS-fs (nvme0n1): lz4 decompress failed, ret:-4155
> [ 14.089851] F2FS-fs (nvme0n1): lz4 decompress
When decompression fails, f2fs_prepare_compress_overwrite will enter an
endless loop, which may cause a hung task.
[ 14.088665] F2FS-fs (nvme0n1): lz4 decompress failed, ret:-4155
[ 14.089851] F2FS-fs (nvme0n1): lz4 decompress failed, ret:-4155
Signed-off-by: Fengnan Chang
---
fs/f2fs/compress.c | 9
When writing a whole cluster, all pages are uptodate, so there is no need to call
f2fs_prepare_compress_overwrite; introduce f2fs_all_cluster_page_ready
to avoid this.
Signed-off-by: Fengnan Chang
---
fs/f2fs/compress.c | 11 ---
fs/f2fs/data.c | 9 +++--
fs/f2fs/f2fs.h | 4
Since a pvec holds 15 pages, which is not a multiple of 4, when writing
compressed pages in 64K units, pagevec_lookup_range_tag will be called
again, and sometimes this takes a lot of time.
Use on-stack pages instead of a pvec to mitigate this problem.
Signed-off-by: Fengnan Chang
---
fs/f2fs
Try to support compressed file write amplification accounting.
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c | 19 +++
fs/f2fs/debug.c | 7 +--
fs/f2fs/f2fs.h | 34 ++
3 files changed, 54 insertions(+), 6 deletions(-)
diff --git a/fs
Optimise f2fs_write_cache_pages, and support compressed file write/read
amplification accounting.
Fengnan Chang (3):
f2fs: introduce f2fs_all_cluster_page_ready
f2fs: use onstack pages instead of pvec
f2fs: support compressed file write/read amplification
fs/f2fs/compress.c | 15
Optimise f2fs_write_cache_pages, and support compressed file write
amplification accounting.
Fengnan Chang (3):
f2fs: introduce f2fs_all_cluster_page_uptodate
f2fs: use onstack pages instead of pvec
f2fs: support compressed file write amplification accounting
fs/f2fs/compress.c | 27
Try to support compressed file write amplification accounting.
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c | 14 ++
fs/f2fs/debug.c | 5 +++--
fs/f2fs/f2fs.h | 17 +
3 files changed, 30 insertions(+), 6 deletions(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
Introduce f2fs_all_cluster_page_uptodate to reduce calls to
f2fs_prepare_compress_overwrite.
Signed-off-by: Fengnan Chang
---
fs/f2fs/compress.c | 23 ++-
fs/f2fs/data.c | 5 +
fs/f2fs/f2fs.h | 2 ++
3 files changed, 29 insertions(+), 1 deletion(-)
diff --git
Since a pvec holds 15 pages, which is not a multiple of 4, when writing
compressed pages in 64K units, pagevec_lookup_range_tag will be called
again, and sometimes this takes a lot of time.
Use on-stack pages instead of a pvec to mitigate this problem.
Signed-off-by: Fengnan Chang
---
fs/f2fs
Try to support forward recovery for compressed files; this is a rough version
and needs more testing to improve it.
Signed-off-by: Fengnan Chang
---
fs/f2fs/node.c | 7 +++
fs/f2fs/recovery.c | 9 +
2 files changed, 16 insertions(+)
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index
Try to support forward recovery for compressed files; this is a rough
version and needs more testing to improve it.
Signed-off-by: Fengnan Chang
---
fs/f2fs/node.c | 7 +++
fs/f2fs/recovery.c | 10 +-
2 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/fs/f2fs/node.c b/fs
Notify when the filesystem is mounted with the -o inlinecrypt option but the
device does not support inline encryption.
Signed-off-by: Fengnan Chang
---
fs/f2fs/f2fs.h | 18 ++
fs/f2fs/super.c | 7 +++
2 files changed, 25 insertions(+)
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index
Notify when the filesystem is mounted with the -o inlinecrypt option but the
device does not support inline encryption.
Signed-off-by: Fengnan Chang
---
fs/ext4/super.c | 12
1 file changed, 12 insertions(+)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 81749eaddf4c..f91454d3a877 100644
--- a/fs
Introduce blk_crypto_supported; filesystems may use this to check whether
the storage device supports inline encryption.
Signed-off-by: Fengnan Chang
---
block/blk-crypto.c | 6 +-
include/linux/blk-crypto.h | 5 +
2 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/block
Reported-by: kernel test robot
Reported-by: Dan Carpenter
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c | 2 +-
fs/f2fs/file.c | 5 -
2 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index b09f401f8960..5675af1b6916 100644
--- a/fs/f2fs/data.c
+
ty, and in f2fs_commit_inmem_pages(), we will write
partial raw pages into the compressed cluster, resulting in a corrupted
compressed cluster layout.
Fixes: 4c8ff7095bef ("f2fs: support data compression")
Fixes: 7eab7a696827 ("f2fs: compress: remove unneeded read when rewrite whole
cluster")
Signed-
ty, and in f2fs_commit_inmem_pages(), we will write
partial raw pages into the compressed cluster, resulting in a corrupted
compressed cluster layout.
Fixes: 4c8ff7095bef ("f2fs: support data compression")
Fixes: 7eab7a696827 ("f2fs: compress: remove unneeded read when rewrite
whole cluster")
Signed-
-compressed cluster, so it's ok.
Fixes: 4c8ff7095bef ("f2fs: support data compression")
Fixes: 7eab7a696827 ("f2fs: compress: remove unneeded read when rewrite whole
cluster")
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c | 2 +-
fs/f2fs/file.c | 3 ++-
2 files changed, 3 insertions(+), 2 dele
rewrite whole
cluster)
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c | 4 +---
fs/f2fs/file.c | 3 ++-
2 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 6b5f389ba998..5cbee4ed0982 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -3358,8
Compressed pages are also invalidated in the truncate block process, so remove
the redundant invalidation of compressed pages in f2fs_evict_inode.
Signed-off-by: Fengnan Chang
---
fs/f2fs/inode.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
index
Compressed pages are also invalidated in the truncate block process, so remove
the redundant invalidation of compressed pages in f2fs_evict_inode.
In the normal case, f2fs_evict_inode is only called when i_nlink becomes 0,
so mark it unlikely.
Signed-off-by: Fengnan Chang
---
fs/f2fs/inode.c | 3 ++-
1 file changed, 2 insertions
Compressed pages are also invalidated in the truncate block process, so remove
the redundant invalidation of compressed pages in f2fs_evict_inode.
Signed-off-by: Fengnan Chang
---
fs/f2fs/inode.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
index 935016e56010
Great work, it fixes my problem.
___
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
Hi Chao:
As I mentioned before,
https://lore.kernel.org/linux-f2fs-devel/kl1pr0601mb400309c5d62bfddde6aad8aebb...@kl1pr0601mb4003.apcprd06.prod.outlook.com/T/#mbe9a8f27626ac7ca71035e25f5502e756ab877ac
there is a potential deadlock problem when just removing the
compressed file condition in __should_seria
When the compress_cache option is enabled, in my test environment
f2fs_invalidate_compress_pages sometimes takes a long time to finish, with
find_get_pages_range taking most of the time. Has anyone encountered this
problem too? In my test, I have 8 files, each file 64MB in size, doing some
sequential and random reads or wr
Since the compress inode is not a regular file, generic_error_remove_page in
f2fs_invalidate_compress_pages will always fail; set the compress
inode as a regular file to fix it.
Fixes: 6ce19aff0b8c ("f2fs: compress: add compress_inode to cache compressed
blocks")
Signed-off-by: Fengnan Chang
e_fadvise() for POSIX_FADV_DONTNEED case.
Signed-off-by: Fengnan Chang
---
fs/f2fs/file.c | 12 +---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index 32c0bd545c5c..20f44cc8dfd1 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -4677,12 +46
Since the compress inode is not a regular file, generic_error_remove_page in
f2fs_invalidate_compress_pages will always fail; set the compress
inode as a regular file to fix it.
Fixes: 6ce19aff0b8c ("f2fs: compress: add compress_inode to cache compressed
blocks")
Signed-off-by: Fengnan Chan
e_fadvise() for POSIX_FADV_DONTNEED case.
Signed-off-by: Fengnan Chang
---
fs/f2fs/file.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index 32c0bd545c5c..dafdaad9a9e4 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -4677,12 +46
Since the compress inode is not a regular file, generic_error_remove_page in
f2fs_invalidate_compress_pages will always fail; set the compress
inode as a regular file to fix it.
Signed-off-by: Fengnan Chang
---
fs/f2fs/inode.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/f2fs/inode.c b/fs
Previously, the compressed page cache was dropped when cleaning the page cache,
but POSIX_FADV_DONTNEED could not clean the compressed page cache; this commit
tries to support it.
Signed-off-by: Fengnan Chang
---
fs/f2fs/compress.c | 10 --
fs/f2fs/f2fs.h | 7 ---
2 files changed, 12 insertions(
Don't allocate a new page pointer array to replace the old one; just reuse the
old array, introduce valid_nr_cpages to indicate the number of valid page
pointers in the array, and save one page array allocation and free when
writing a compressed page.
Signed-off-by: Fengnan Chang
---
fs/f2fs/compress.c
In my test, serialized io for compressed files makes multithreaded small-write
performance drop a lot.
I'm trying to figure out why we need __should_serialize_io; IMO, we use
__should_serialize_io to avoid deadlock or to
improve sequential performance, but I don't understand why we should do this fo
Add "f2fs_lzo_compress_private" and "f2fs_lz4_compress_private" slab
caches to speed up memory allocation when initializing the compress ctx.
No slab cache is added for zstd, as the private data for zstd depends on the
mount option and is too big.
Signed-off-by: Fengnan Chang
---
fs/
Fix an f2fs.rst build warning.
Fixes: 151b1982be5d ("f2fs: compress: add nocompress extensions support")
Signed-off-by: Fengnan Chang
---
Documentation/filesystems/f2fs.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Documentation/filesystems/f2fs.rst
b/Documentation
192
Signed-off-by: Fengnan Chang
Signed-off-by: Chao Yu
---
fs/f2fs/compress.c | 19 +++
fs/f2fs/data.c | 7 ---
fs/f2fs/f2fs.h | 2 ++
3 files changed, 25 insertions(+), 3 deletions(-)
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index c1bf9ad4c
separate buffer and direct io in block allocation statistics.
New output will look like this:
        buffer  direct  segments
IPU:    0       0       N/A
SSR:    0       0       0
LFS:    0       0       0
Signed-off-by: Fengnan Chang
---
fs/f2fs
For now, overwriting a file with direct io uses the inplace policy, but it is
not counted; fix it. And use stat_add_inplace_blocks(sbi, 1, )
instead of stat_inc_inplace_blocks(sb, ).
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c| 4 +++-
fs/f2fs/f2fs.h| 8
fs/f2fs/segment.c | 2 +-
3 files
Chao Yu wrote on Wed, Oct 13, 2021 at 23:19:
>
> On 2021/10/9 19:27, Fengnan Chang wrote:
> > For now, overwriting a file with direct io uses the inplace policy, but it is
> > not counted; fix it. And use stat_add_inplace_blocks(sbi, 1, )
> > instead of stat_inc_inplace_blocks(sb, ).
> >
192
Signed-off-by: Fengnan Chang
---
fs/f2fs/compress.c | 12
fs/f2fs/data.c | 7 +++
fs/f2fs/f2fs.h | 1 +
3 files changed, 20 insertions(+)
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index c1bf9ad4c220..c4f36ead6f17 100644
--- a/fs/f2fs/compress.c
+++ b
192
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index f4fd6c246c9a..267db5d3993e 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -3025,6 +3025,9 @@ static int f2fs_write_cache_pages(struct address_sp
When mounting with the whint_mode option, it doesn't work; fix it.
Fixes: d0b9e42ab615 ("f2fs: introduce inmem curseg")
Reported-by: tanghuan
Signed-off-by: Fengnan Chang
---
fs/f2fs/super.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
separate buffer and direct io in block allocation statistics.
New output will look like this:
        buffer  direct  segments
IPU:    0       0       N/A
SSR:    0       0       0
LFS:    0       0       0
Signed-off-by: Fengnan Chang
Reviewed-by: Chao
For now, overwriting a file with direct io uses the inplace policy, but it is
not counted; fix it. And use stat_add_inplace_blocks(sbi, 1, )
instead of stat_inc_inplace_blocks(sb, ).
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c| 7 ++-
fs/f2fs/f2fs.h| 8
fs/f2fs/segment.c | 2 +-
3
From: Fengnan Chang
separate buffer and direct io in block allocation statistics.
New output will look like this:
        buffer  direct  segments
IPU:    0       0       N/A
SSR:    0       0       0
LFS:    0       0       0
Signed-off-by: Fengnan
From: Fengnan Chang
For now, overwriting a file with direct io uses the inplace policy, but it is
not counted; fix it.
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c | 7 ++-
fs/f2fs/f2fs.h | 7 +++
2 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index
From: Fengnan Chang
improve block allocation statistics:
1. fix missing inplace count in overwrite with direct io.
2. separate buffer and direct io.
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c| 17 -
fs/f2fs/debug.c | 24 +++-
fs/f2fs/f2fs.h
On 2021/8/24 8:09, Chao Yu wrote:
On 2021/8/23 20:07, Fengnan Chang wrote:
On 2021/8/20 17:41, Chao Yu wrote:
On 2021/8/18 11:49, Fengnan Chang wrote:
On 2021/8/13 9:36, Chao Yu wrote:
On 2021/8/13 5:15, Jaegeuk Kim wrote:
On 08/06, Chao Yu wrote:
On 2021/7/29 20:25, Fengnan Chang
On 2021/8/20 17:41, Chao Yu wrote:
On 2021/8/18 11:49, Fengnan Chang wrote:
On 2021/8/13 9:36, Chao Yu wrote:
On 2021/8/13 5:15, Jaegeuk Kim wrote:
On 08/06, Chao Yu wrote:
On 2021/7/29 20:25, Fengnan Chang wrote:
For now, overwrite file with direct io use inplace policy, but not
Don't create the discard thread when the device doesn't support realtime discard
or the user specifies the nodiscard mount option.
Signed-off-by: Fengnan Chang
Signed-off-by: Yangtao Li
Reviewed-by: Chao Yu
---
fs/f2fs/f2fs.h| 1 +
fs/f2fs/segment.c | 25 +++--
fs/f2fs/supe
On 2021/8/13 9:36, Chao Yu wrote:
On 2021/8/13 5:15, Jaegeuk Kim wrote:
On 08/06, Chao Yu wrote:
On 2021/7/29 20:25, Fengnan Chang wrote:
For now, overwriting a file with direct io uses the inplace policy, but it is not
counted; fix it.
IMO, LFS/SSR/IPU stats in debugfs was for buffered write, maybe we
Don't create the discard thread when the device does not support realtime discard.
Signed-off-by: Fengnan Chang
Signed-off-by: Yangtao Li
---
fs/f2fs/f2fs.h| 1 +
fs/f2fs/segment.c | 25 +++--
fs/f2fs/super.c | 27 ++-
3 files changed, 46 insertions(
Don't create the discard thread when the device does not support realtime discard.
Signed-off-by: Fengnan Chang
Signed-off-by: Yangtao Li
---
fs/f2fs/f2fs.h| 1 +
fs/f2fs/segment.c | 36
fs/f2fs/super.c | 31 ++-
3 files change
Since a cluster is the basic unit of compression, a cluster is either compressed
or not, so we can calculate valid blocks only for the first page in a cluster;
the other pages can just be skipped.
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c | 22 +-
1 file changed, 17 insertions(+), 5 deletions
Since a cluster is the basic unit of compression, a cluster is either compressed
or not, so we can calculate valid blocks only for the first page in a cluster;
the other pages can just be skipped.
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c | 24 +++-
1 file changed, 19 insertions(+), 5 deletions
My mistake, forget this...
On 2021/8/12 11:05, Fengnan Chang wrote:
Since a cluster is the basic unit of compression, a cluster is either compressed
or not, so we can calculate valid blocks only for the first page in a cluster;
the other pages can just be skipped.
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c | 23
Since a cluster is the basic unit of compression, a cluster is either compressed
or not, so we can calculate valid blocks only for the first page in a cluster;
the other pages can just be skipped.
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c | 23 ++-
1 file changed, 18 insertions(+), 5 deletions
Since a cluster is the basic unit of compression, a cluster is either compressed
or not, so we can calculate valid blocks only for the first page in a cluster;
the other pages can just be skipped.
Signed-off-by: Fengnan Chang
---
fs/f2fs/compress.c | 1 +
fs/f2fs/data.c | 21 -
fs/f2fs/f2fs.h
On 2021/8/11 10:50, Chao Yu wrote:
On 2021/8/11 10:32, Fengnan Chang wrote:
On 2021/8/11 10:29, Chao Yu wrote:
On 2021/8/11 10:17, Fengnan Chang wrote:
On 2021/8/11 10:07, Chao Yu wrote:
On 2021/8/10 11:39, Fengnan Chang wrote:
Since cluster is basic unit of compression, one cluster
On 2021/8/11 10:29, Chao Yu wrote:
On 2021/8/11 10:17, Fengnan Chang wrote:
On 2021/8/11 10:07, Chao Yu wrote:
On 2021/8/10 11:39, Fengnan Chang wrote:
Since cluster is basic unit of compression, one cluster is
compressed or
not, so we can calculate valid blocks only for first page in
On 2021/8/11 10:07, Chao Yu wrote:
On 2021/8/10 11:39, Fengnan Chang wrote:
Since a cluster is the basic unit of compression, a cluster is either compressed
or not, so we can calculate valid blocks only for the first page in a cluster;
the other pages can just be skipped.
Signed-off-by: Fengnan Chang
---
fs/f2fs
Since a cluster is the basic unit of compression, a cluster is either compressed
or not, so we can calculate valid blocks only for the first page in a cluster;
the other pages can just be skipped.
Signed-off-by: Fengnan Chang
---
fs/f2fs/compress.c | 1 +
fs/f2fs/data.c | 19 ++-
fs/f2fs/f2fs.h
11:46, Fengnan Chang wrote:
Hi Chao:
Since cc.cluster_idx will only be set in f2fs_compress_ctx_add_page,
for a non-compressed cluster cc.cluster_idx should always be NULL. It
means that the handling of a non-compressed cluster is the same as
before.
Yup, so what I mean is why not ski
Chang wrote:
f2fs_read_multi_pages will handle it; all truncated pages will be zeroed out,
whether partial or all pages in the cluster.
On 2021/7/22 21:47, Chao Yu wrote:
On 2021/7/22 11:25, Fengnan Chang wrote:
Since cluster is basic unit of compression, one cluster is
compressed or
not, so we can calculate
For compressed files, after releasing compressed blocks, direct write is not
allowed, but we should allow direct write after truncating to zero.
Reviewed-by: Chao Yu
Signed-off-by: Fengnan Chang
---
Documentation/filesystems/f2fs.rst | 7 +--
fs/f2fs/file.c | 8
2
For compressed files, after releasing compressed blocks, direct write is not
allowed, but we should allow direct write after truncating to zero.
Signed-off-by: Fengnan Chang
---
Documentation/filesystems/f2fs.rst | 7 +--
fs/f2fs/file.c | 8
2 files changed, 13 inser
Um.. I think this version should be ok.
Thanks.
On 2021/7/23 10:31, Fengnan Chang wrote:
For compressed files, after releasing compressed blocks, direct write is not
allowed, but we should allow direct write after truncating to zero.
Signed-off-by: Fengnan Chang
---
fs/f2fs/file.c | 8 +++
I'll check this later.
Thanks.
On 2021/8/6 8:57, Chao Yu wrote:
On 2021/7/23 11:18, Fengnan Chang wrote:
f2fs_read_multi_pages will handle it; all truncated pages will be zeroed out,
whether partial or all pages in the cluster.
On 2021/7/22 21:47, Chao Yu wrote:
On 2021/7/22 11:25, Fengnan Chang
For now, overwriting a file with direct io uses the inplace policy, but it is not
counted; fix it.
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c | 6 ++
fs/f2fs/f2fs.h | 2 ++
2 files changed, 8 insertions(+)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index d2cf48c5a2e4..60510acf91ec 100644
--- a/fs
Don't allocate a new page array to replace the old one; just use the old page
array, to save one page array allocation and free when writing a compressed page.
Signed-off-by: Fengnan Chang
---
fs/f2fs/compress.c | 18 --
fs/f2fs/f2fs.h | 1 +
2 files changed, 5 insertions(+), 14 dele
OK, it seems one place was missed.
Thanks.
On 2021/7/23 13:26, Chao Yu wrote:
On 2021/7/23 11:52, Fengnan Chang wrote:
Sorry, I didn't get your point. In my opinion, new_nr_cpages should
always be less than nr_cpages, is that right? So we can just use cpages and
don't need to
Sorry, I didn't get your point. In my opinion, new_nr_cpages should
always be less than nr_cpages, is that right? So we can just use cpages and
don't need to alloc a new one.
Thanks.
On 2021/7/22 21:53, Chao Yu wrote:
On 2021/7/22 11:47, Fengnan Chang wrote:
Don't alloc new page a
f2fs_read_multi_pages will handle it; all truncated pages will be zeroed out,
whether partial or all pages in the cluster.
On 2021/7/22 21:47, Chao Yu wrote:
On 2021/7/22 11:25, Fengnan Chang wrote:
Since cluster is basic unit of compression, one cluster is compressed or
not, so we can calculate valid
Thanks for your advice, I'll send a new version later.
On 2021/7/22 21:26, Chao Yu wrote:
On 2021/7/2 11:11, Fengnan Chang wrote:
We should allow writing to a compress-released file after truncating it to zero.
Signed-off-by: Fengnan Chang
---
fs/f2fs/file.c | 7 +++
1 file changed, 7 inser