On 2020/5/27 10:20, Sahitya Tummala wrote:
> When a compressed file is being overwritten, the current retry logic
> doesn't include the current page in the retry, since it sets the new
> start index to 0 and the new end index to writeback_index - 1. This
> causes the corresponding cluster to be written uncompressed, as
> normal pages. Fix this by allowing writeback to be retried for the
> current page as well (in the case of a compressed page getting
> retried due to an index mismatch with the cluster index), so that
> the cluster can be written compressed in case of an overwrite.
> 
> Signed-off-by: Sahitya Tummala <stumm...@codeaurora.org>
> ---
>  fs/f2fs/data.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index 4af5fcd..bfd1df4 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -3024,7 +3024,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>       if ((!cycled && !done) || retry) {

IMO, we added the retry logic in the wrong place. Note that the cycled
value can be zero only if wbc->range_cyclic is true, and only in that
case is writeback_index valid.

However, if retry is true while wbc->range_cyclic is false, then
writeback_index would be an uninitialized variable.
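
For reference, the initialization at the top of f2fs_write_cache_pages()
follows the same pattern as write_cache_pages() in mm/page-writeback.c,
roughly like below (a paraphrase from memory, not the exact code):

	pgoff_t index, end;
	pgoff_t writeback_index;	/* only assigned for range_cyclic */
	int cycled;

	if (wbc->range_cyclic) {
		writeback_index = mapping->writeback_index; /* prev offset */
		index = writeback_index;
		cycled = (index == 0);
		end = -1;
	} else {
		index = wbc->range_start >> PAGE_SHIFT;
		end = wbc->range_end >> PAGE_SHIFT;
		cycled = 1; /* ignore range_cyclic tests */
		/* writeback_index is never assigned on this path */
	}

So the "retry" half of the condition can be reached on either path,
but writeback_index only holds a meaningful value on the range_cyclic
one.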

Thoughts?

Thanks,

>               cycled = 1;
>               index = 0;
> -             end = writeback_index - 1;
> +             end = retry ? -1 : writeback_index - 1;
>               goto retry;
>       }
>       if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
> 
