On Thu, 2023-06-08 at 17:41 -0400, Dave Wysochanski wrote:
> If a network filesystem using netfs implements a clamp_length()
> function, it can set subrequest lengths smaller than a page size.
> When we loop through the folios in netfs_rreq_unlock_folios() to
> mark any folios that are to be written back to the cache, we need
> to make sure we call folio_start_fscache() only once for each
> folio.  Otherwise, this simple testcase:
>   mount -o fsc,rsize=1024,wsize=1024 127.0.0.1:/export /mnt/nfs
>   dd if=/dev/zero of=/mnt/nfs/file.bin bs=4096 count=1
>   1+0 records in
>   1+0 records out
>   4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0126359 s, 324 kB/s
>   cat /mnt/nfs/file.bin > /dev/null
> 
> will trigger an oops similar to the following:
> ...
>  page dumped because: VM_BUG_ON_FOLIO(folio_test_private_2(folio))
>  ------------[ cut here ]------------
>  kernel BUG at include/linux/netfs.h:44!
> ...
>  CPU: 5 PID: 134 Comm: kworker/u16:5 Kdump: loaded Not tainted 6.4.0-rc5
> ...
>  RIP: 0010:netfs_rreq_unlock_folios+0x68e/0x730 [netfs]
> ...
>  Call Trace:
>   <TASK>
>   netfs_rreq_assess+0x497/0x660 [netfs]
>   netfs_subreq_terminated+0x32b/0x610 [netfs]
>   nfs_netfs_read_completion+0x14e/0x1a0 [nfs]
>   nfs_read_completion+0x2f9/0x330 [nfs]
>   rpc_free_task+0x72/0xa0 [sunrpc]
>   rpc_async_release+0x46/0x70 [sunrpc]
>   process_one_work+0x3bd/0x710
>   worker_thread+0x89/0x610
>   kthread+0x181/0x1c0
>   ret_from_fork+0x29/0x50
> 
> Signed-off-by: Dave Wysochanski <dwyso...@redhat.com>
> ---
>  fs/netfs/buffered_read.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
> index 3404707ddbe7..0dafd970c1b6 100644
> --- a/fs/netfs/buffered_read.c
> +++ b/fs/netfs/buffered_read.c
> @@ -21,6 +21,7 @@ void netfs_rreq_unlock_folios(struct netfs_io_request *rreq)
>       pgoff_t last_page = ((rreq->start + rreq->len) / PAGE_SIZE) - 1;
>       size_t account = 0;
>       bool subreq_failed = false;
> +     bool folio_started;

nit: I'd move this declaration inside the xas_for_each loop, and just
initialize it to false there.
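
i.e. something like this (untested, just to sketch the idea):

	xas_for_each(&xas, folio, last_page) {
		/* one flag per folio; reset each time around the loop */
		bool folio_started = false;
		...
	}

That keeps the flag's scope matched to the folio it describes.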

>  
>  XA_STATE(xas, &rreq->mapping->i_pages, start_page);
>  
> @@ -53,6 +54,7 @@ void netfs_rreq_unlock_folios(struct netfs_io_request *rreq)
>  
>               pg_end = folio_pos(folio) + folio_size(folio) - 1;
>  
> +             folio_started = false;
>               for (;;) {
>                       loff_t sreq_end;
>  
> @@ -60,8 +62,10 @@ void netfs_rreq_unlock_folios(struct netfs_io_request *rreq)
>                               pg_failed = true;
>                               break;
>                       }
> -                     if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags))
> +                     if (!folio_started && test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) {
>                               folio_start_fscache(folio);
> +                             folio_started = true;
> +                     }
>                       pg_failed |= subreq_failed;
>                       sreq_end = subreq->start + subreq->len - 1;
>                       if (pg_end < sreq_end)


The logic looks correct, though.
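
FWIW, the reason this oopses is that folio_start_fscache() BUGs if
PG_private_2 is already set on the folio. Roughly, from
include/linux/netfs.h in 6.4:

	static inline void folio_start_fscache(struct folio *folio)
	{
		/* Starting a second cache write before the first
		 * completes is not allowed. */
		VM_BUG_ON_FOLIO(folio_test_private_2(folio), folio);
		folio_get(folio);
		folio_set_private_2(folio);
	}

With rsize=1024, a 4096-byte folio is covered by four subrequests, so
the old code could hit that assertion on the second one. Note that
VM_BUG_ON_FOLIO() only fires with CONFIG_DEBUG_VM; without it you'd
instead take extra folio refs that never get dropped.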

Reviewed-by: Jeff Layton <jlay...@kernel.org>
