>
> Some of my own previous thoughts on this strategy:
>
>  - If we allocate all memory and map these pages before issuing the
>    I/Os, every inflight I/O will hold its temporary pages until
>    decompression finishes. In contrast, if we allocate or reuse such
>    pages just before decompression, the memory footprint is minimized.
>
>    I think it will hurt the memory numbers at least on very low-end
>    devices with slow storage. (I've seen that f2fs already has a fairly
>    big mempool.)
>
>  - Many compression algorithms are not suitable for softirq contexts.
>    Also, I vaguely remember that if softirq processing lasts for more
>    than 2ms, it gets pushed into ksoftirqd, which is effectively just
>    another process context, and it may delay other important interrupt
>    handling.
>
>  - Going back to the non-deterministic scheduling of workqueues: I
>    guess it may just be a scheduling penalty because decompression
>    consumed a lot of CPU earlier, so the priority becomes low, but that
>    is pure guesswork. Maybe we need to use an RT scheduling policy
>    instead.
>
>    At the very least dm-verity could use WQ_HIGHPRI, but I don't see
>    the WQ_HIGHPRI flag set for dm-verity.
>
> Thanks,
> Gao Xiang

I totally understand what you are worried about. However, in the real
world, the non-determinism from workqueues is harsher than we expected.
As you know, read I/Os are on the critical path most of the time, and
the I/O latency variation with workqueues is currently too large.
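For reference, marking a workqueue high-priority is just a flag at
allocation time. A minimal kernel-style sketch (the workqueue name and
max_active value here are illustrative assumptions, not the actual
dm-verity code):

```c
/* Hypothetical sketch: allocate a high-priority workqueue.
 * WQ_HIGHPRI dispatches work items to a worker pool running at
 * nice -20 instead of the default nice 0 pool, which reduces
 * scheduling latency under CPU pressure.
 */
struct workqueue_struct *verity_wq;

verity_wq = alloc_workqueue("kverityd",
			    WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE,
			    num_online_cpus());
if (!verity_wq)
	return -ENOMEM;
```

WQ_CPU_INTENSIVE additionally keeps long-running items (such as hashing
or decompression) from blocking other work on the same per-CPU pool.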

I also think it would be better to have something like RT scheduling
here. We could think about it more.

Thanks,


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel