Since the cluster is the basic unit of compression, a whole cluster is either
compressed or it is not, so we only need to calculate valid blocks for the
first page in each cluster; the remaining pages can simply be skipped.
Signed-off-by: Fengnan Chang
---
fs/f2fs/compress.c |  1 +
fs/f2fs/data.c     | 19 ++-
fs/f2fs/f2fs.h     |
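To make the idea concrete, here is a minimal user-space sketch (not the actual
f2fs code; CLUSTER_SIZE and the cluster_has_valid_blocks() helper are
assumptions for illustration) of running the per-cluster check only on the
first page of each cluster and reusing the result for the rest:

#include <stdbool.h>
#include <stdio.h>

#define CLUSTER_SIZE 4	/* assumed number of pages per compression cluster */

/* hypothetical stand-in for the per-cluster valid-block check */
static bool cluster_has_valid_blocks(unsigned long cluster_idx)
{
	return (cluster_idx % 2) == 0;	/* fake data for the demo */
}

int main(void)
{
	unsigned long nr_pages = 12;
	bool valid = false;

	for (unsigned long idx = 0; idx < nr_pages; idx++) {
		/* run the check only for the first page of each cluster */
		if (idx % CLUSTER_SIZE == 0)
			valid = cluster_has_valid_blocks(idx / CLUSTER_SIZE);
		/* the other pages in the cluster reuse the cached result */
		printf("page %2lu (cluster %lu): valid=%d\n",
		       idx, idx / CLUSTER_SIZE, valid);
	}
	return 0;
}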
Um, in my previous thinking I was considering random reads of non-compressed
clusters, so I didn't handle the non-compressed cluster case. After your
reminder, I think we can skip the f2fs_is_compressed_cluster() check for
sequential reads of non-compressed clusters.
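As an illustration of that point, here is a small user-space sketch (the
is_compressed_cluster() helper and cluster size are assumptions, not the real
f2fs code) of re-running the compressed-cluster check only when a sequential
read crosses into a new cluster:

#include <stdbool.h>
#include <stdio.h>

#define CLUSTER_SIZE 4	/* assumed pages per cluster */

/* hypothetical stand-in for f2fs_is_compressed_cluster() */
static bool is_compressed_cluster(unsigned long cluster_idx)
{
	return (cluster_idx % 3) == 0;	/* fake data for the demo */
}

int main(void)
{
	unsigned long pages[] = { 0, 1, 2, 3, 4, 5, 6, 7 };	/* sequential read */
	unsigned long prev_cluster = (unsigned long)-1;
	bool compressed = false;

	for (unsigned long i = 0; i < sizeof(pages) / sizeof(pages[0]); i++) {
		unsigned long cluster = pages[i] / CLUSTER_SIZE;

		/* re-run the check only when we enter a new cluster */
		if (cluster != prev_cluster) {
			compressed = is_compressed_cluster(cluster);
			prev_cluster = cluster;
		}
		printf("page %lu: compressed=%d\n", pages[i], compressed);
	}
	return 0;
}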
On 2021/8/9 22:38, Chao Yu wrote:
On 2021/8/9 11:46, Fengnan Chang wrote:
From: Daeho Jeong
Added a mount option to control the block allocation mode, so that filesystem
developers can simulate filesystem fragmentation and the after-GC situation
for experimental purposes and better understand filesystem behavior under
such severe conditions. This supports "normal", "seg_random" and
"b
https://bugzilla.kernel.org/show_bug.cgi?id=214009
Nikolaos Bezirgiannis (bez...@gmail.com) changed:
What      |Removed     |Added
Status    |ASSIGNED    |RESOLVED
On 2021/8/9 11:46, Fengnan Chang wrote:
Hi Chao,
Since cc.cluster_idx is only set in f2fs_compress_ctx_add_page(),
cc.cluster_idx should always be NULL for a non-compressed cluster. That means
the handling of non-compressed clusters stays the same as before.
Yup, so what I mean is w
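To illustrate the point being discussed, here is a small user-space sketch
(the struct and the NULL_CLUSTER sentinel are assumptions, not the real f2fs
definitions) of how a cluster index that is only assigned on the compressed
path can later distinguish the two cases:

#include <stdio.h>

#define NULL_CLUSTER ((unsigned long)-1)	/* assumed "not set" sentinel */

struct compress_ctx {
	unsigned long cluster_idx;	/* only assigned when pages are added */
};

/* stand-in for f2fs_compress_ctx_add_page(): the only place the index is set */
static void compress_ctx_add_page(struct compress_ctx *cc,
				  unsigned long page_idx, unsigned int cluster_size)
{
	cc->cluster_idx = page_idx / cluster_size;
}

int main(void)
{
	struct compress_ctx plain = { .cluster_idx = NULL_CLUSTER };
	struct compress_ctx compressed = { .cluster_idx = NULL_CLUSTER };

	compress_ctx_add_page(&compressed, 8, 4);

	/* a non-compressed cluster never gets its index set, so it is handled as before */
	printf("plain:      %s\n", plain.cluster_idx == NULL_CLUSTER ?
	       "cluster_idx unset" : "cluster_idx set");
	printf("compressed: %s\n", compressed.cluster_idx == NULL_CLUSTER ?
	       "cluster_idx unset" : "cluster_idx set");
	return 0;
}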
https://bugzilla.kernel.org/show_bug.cgi?id=214009
Chao Yu (c...@kernel.org) changed:
What      |Removed     |Added
Status    |NEW         |ASSIGNED
CC        |
https://bugzilla.kernel.org/show_bug.cgi?id=214009
Bug ID: 214009
Summary: Compression has no real effect in disk usage
Product: File System
Version: 2.5
Kernel Version: 5.13
Hardware: All
OS: Linux
Tree: Main
On Sat, Aug 07, 2021 at 10:16:39AM -0600, Jens Axboe wrote:
> > /*
> > - * 8 best effort priority levels are supported
> > + * The RT and BE priority classes both support up to 8 priority levels.
> > */
> > -#define IOPRIO_BE_NR 8
> > +#define IOPRIO_NR_LEVELS 8
>
> That might not be a
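For context on the rename above, here is a user-space sketch of how an ioprio
value packs a class and a level; the constants below mirror
include/uapi/linux/ioprio.h, and the point is that the RT and BE classes share
the same 8-level range, which is what the class-neutral IOPRIO_NR_LEVELS name
expresses:

#include <stdio.h>

/* values mirroring include/uapi/linux/ioprio.h */
#define IOPRIO_CLASS_SHIFT	13
#define IOPRIO_CLASS_RT		1
#define IOPRIO_CLASS_BE		2
#define IOPRIO_NR_LEVELS	8	/* shared by RT and BE, replacing IOPRIO_BE_NR */

#define IOPRIO_PRIO_VALUE(class, level)	(((class) << IOPRIO_CLASS_SHIFT) | (level))

int main(void)
{
	for (int level = 0; level < IOPRIO_NR_LEVELS; level++)
		printf("level %d: RT=0x%04x BE=0x%04x\n", level,
		       IOPRIO_PRIO_VALUE(IOPRIO_CLASS_RT, level),
		       IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, level));
	return 0;
}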