On 2021/1/5 9:28, Daeho Jeong wrote:
From: Daeho Jeong <daehoje...@google.com>

Fixed null page reference when find_lock_page() fails in
redirty_blocks().

Signed-off-by: Daeho Jeong <daehoje...@google.com>
Reported-by: Colin Ian King <colin.k...@canonical.com>
Fixes: 5fdb322ff2c2 ("f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE")
---
  fs/f2fs/file.c | 4 +++-
  1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index 9e5275716be8..bf6682a52433 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -4060,8 +4060,10 @@ static int redirty_blocks(struct inode *inode, pgoff_t page_idx, int len)
for (i = 0; i < page_len; i++, redirty_idx++) {
                page = find_lock_page(mapping, redirty_idx);
-               if (!page)
+               if (!page) {
                        ret = -ENOENT;

find_lock_page() failing here means the page could not be allocated, so this is really an out-of-memory condition:

ret = -ENOMEM;

+                       continue;

How about breaking the loop in the out-of-memory case? Under such memory pressure we have little chance of dirtying the whole cluster anyway, and continuing to allocate pages for the target file will only add to the pressure and slow many things down.
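Putting both suggestions together, the loop could look like the sketch below (against the quoted hunk, not a tested patch):

```c
	for (i = 0; i < page_len; i++, redirty_idx++) {
		page = find_lock_page(mapping, redirty_idx);
		if (!page) {
			/* find_lock_page() failed to allocate a page:
			 * report -ENOMEM and stop, rather than keep
			 * allocating under memory pressure.
			 */
			ret = -ENOMEM;
			break;
		}
		set_page_dirty(page);
		f2fs_put_page(page, 1);
		f2fs_put_page(page, 0);
	}
```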

Thanks,

+               }
                set_page_dirty(page);
                f2fs_put_page(page, 1);
                f2fs_put_page(page, 0);